The persistent microservice

Henk Dieter

The point of setting up a miniservice architecture is to enable horizontal scaling, to improve reusability, and to speed up development by separating each domain into an independent application. Miniservices that depend on a database pose a number of challenges, and we'll explore a couple of them here. How do you keep data consistent in a distributed system while keeping the system scalable and its domains separated? And how well does the traditional monolithic relational database do in terms of scalability and separation of concerns?

This article is the last part in a series of three, the first being 'From monolith to miniservices' and the second 'Transitioning to a miniservice architecture'.

Upsides of monolithic databases

Monolithic relational databases are popular and established systems that are well supported by many programming languages and frameworks. By defining relationships within the database system, keeping data consistent is relatively straightforward; easily coupling related entity records is one of the strengths of the relational database. Relational databases support ACID transactions, which guarantee data validity in flows where multiple entities need to be updated atomically. In the event of a failure, ACID transactions can be rolled back entirely, keeping the database state consistent. Keeping all data in the same place also simplifies management: there is no need for knowledge of multiple database management systems.

Downsides of monolithic databases

Horizontally scaling monolithic databases is hard. Horizontal scaling of monolithic relational databases is usually set up using a master-slave replication model. In this model, a single master server handles all write transactions and propagates state updates to multiple slave servers, which handle the read transactions. The master-slave model enables handling more read transactions, but write transactions are not scaled up.

An alternative replication model is the asynchronous multi-master model, which allows read as well as write transactions to be distributed over multiple servers. Keeping data consistent in this model is challenging, and ACID transactions are not supported. Inconsistencies cannot be prevented, but have to be resolved afterwards in order to reach an eventually consistent state.

Another way to scale traditional monolithic database systems is horizontal sharding, where each table is horizontally partitioned and distributed over many database servers. This increases the number of read transactions as well as the number of write transactions that can be handled. In this setup, however, a failing database server causes a whole table to become unavailable, or at least incomplete, unless a complex failover system is used.

Monolithic databases are used for many tasks at the same time. Some tasks require joining data, some require reading many rows, and some require a lot of writes. The data being stored might not adhere to any structure. General-purpose databases are, by definition, not optimized for handling any single task.

Challenges with monolithic databases in miniservice architectures

Using a monolithic relational database as a single source of truth within a miniservice architecture introduces another set of challenges. Enforcing that parts of the distributed system only modify data within their own domain is one of them; failing to separate the data into separate domains can cause bugs in systems where services work asynchronously from each other. Executing database migrations concurrently isn't easy either. Especially when different services are maintained by different software development teams, concurrent migrations have to be done very carefully and are error-prone. Database migrations also tend to introduce downtime of the database management system, especially when a large amount of data has to be manipulated. This would disable every miniservice that depends on the database, even though the services run independently from each other.

Setup

So how do you set up persistent miniservices elegantly? There is no single one-size-fits-all solution. Depending on the project context, different strategies for keeping data persistent within a miniservice architecture may be preferred. Roughly, these strategies are the following:

- Single database schema for many services. This strategy makes joins and data consistency easy, but the extensive coupling of data in relational databases makes updates across the whole system hard.
- One schema per service. Restricts table access to a single service, keeping domains separated, without requiring multiple databases. Joining data across domains becomes harder, which might be either a good or a bad thing, depending on the context. A minimal setup sketch follows below.
- One database per service. Allows for choosing job-specific database management systems and lets separate development teams use different configurations. This is harder to set up, but enables all the good things miniservice architectures have to offer.
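To make the one-schema-per-service strategy concrete, here is a minimal sketch, assuming a recent version of Rust's `postgres` crate; the schema and role names are made up for illustration:

```rust
use postgres::{Client, NoTls};

fn main() -> Result<(), postgres::Error> {
    // Administrative connection used only for one-time setup.
    let mut client = Client::connect("host=localhost user=admin", NoTls)?;

    // Each service gets its own schema, plus a role that can only touch it.
    client.batch_execute(
        "CREATE SCHEMA IF NOT EXISTS invoicing;
         CREATE ROLE invoicing_svc LOGIN PASSWORD 'secret';
         GRANT USAGE, CREATE ON SCHEMA invoicing TO invoicing_svc;
         REVOKE ALL ON SCHEMA public FROM invoicing_svc;",
    )?;
    Ok(())
}
```

The invoicing service then connects as `invoicing_svc` and can only reach its own tables; every other service gets an analogous schema and role.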
Eventual consistency

In practice, when data is separated into domains over many database servers, atomic transactions are not possible without extensive locking. There might not even be a consistent state at all times. Instead of using ACID transactions and locking, a mechanism should be introduced to reach an eventually consistent state. One interesting way to do this is by implementing the event sourcing pattern. In this pattern, all database mutations are stored in an event store, which publishes them to the system. Each event should contain all the information needed to create an antagonist event in case its effects have to be undone. Services in the distributed system can subscribe to these events and act accordingly. Whenever a series of events needs to be rolled back, antagonist events are emitted, stored, and handled to regain consistency.

[Figure: Event sourcing pattern — an example of a system implementing the event sourcing pattern. The miniservices communicate via the central event store, which keeps a record of each mutation in the system. The event store propagates the events it receives from a service to the subscribers. As every event is saved, transactions can be rolled back if necessary.]

Using the event sourcing pattern, services can easily send status updates to their clients even before all parts of the transaction have been executed, notifying them of anything interesting happening in the process.
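As a rough, self-contained illustration of the pattern, here is a Rust sketch with hypothetical billing events: an append-only event store that publishes every stored event to its subscribers, and that rolls back by emitting antagonist events.

```rust
#[derive(Debug, Clone)]
enum Event {
    FundsReserved { order_id: u32, amount: i64 },
    FundsReleased { order_id: u32, amount: i64 }, // compensating event
}

impl Event {
    /// Build the antagonist event that undoes this one.
    fn antagonist(&self) -> Event {
        match self {
            Event::FundsReserved { order_id, amount } =>
                Event::FundsReleased { order_id: *order_id, amount: *amount },
            Event::FundsReleased { order_id, amount } =>
                Event::FundsReserved { order_id: *order_id, amount: *amount },
        }
    }
}

/// The event store: an append-only log that notifies subscribers.
struct EventStore {
    log: Vec<Event>,
    subscribers: Vec<Box<dyn Fn(&Event)>>,
}

impl EventStore {
    fn new() -> Self {
        EventStore { log: Vec::new(), subscribers: Vec::new() }
    }

    fn subscribe(&mut self, f: impl Fn(&Event) + 'static) {
        self.subscribers.push(Box::new(f));
    }

    /// Append the event to the log, then publish it to every subscriber.
    fn publish(&mut self, event: Event) {
        self.log.push(event.clone());
        for s in &self.subscribers {
            s(&event);
        }
    }

    /// Roll back: emit antagonist events for the last `n` events, newest first.
    fn roll_back(&mut self, n: usize) {
        let undo: Vec<Event> =
            self.log.iter().rev().take(n).map(Event::antagonist).collect();
        for e in undo {
            self.publish(e);
        }
    }
}

fn main() {
    let mut store = EventStore::new();
    store.subscribe(|e| println!("billing service saw: {:?}", e));
    store.publish(Event::FundsReserved { order_id: 1, amount: 100 });
    store.roll_back(1); // emits FundsReleased { order_id: 1, amount: 100 }
}
```

In a real system the log would live in durable storage and the subscribers would be remote services, but the shape of the mechanism stays the same.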
Conclusion

Setting up a system of miniservices that depends on persistent state is not an easy task. As your application becomes larger and more complex, moving from a monolithic database to a set of separated data sources becomes a necessity. Depending on the project, you can take this as far as you want. If you do choose to have each service maintain its own data, a good way of keeping that data consistent is by introducing the event sourcing pattern. Although it introduces overhead and complexity, it helps keep concerns separated.

As Tweede golf is moving its applications toward miniservices in a service-oriented architecture setting, we won't be using the event sourcing pattern just yet; we'll go for the simpler one-schema-per-service option. But you never know: some time in the future the event sourcing pattern might be of use. Once we have more experience with deploying complex miniservice architectures, we'll take another look at this exploration series.

Transitioning to a miniservice architecture

Henk Dieter

Imagine this: you have made the wise choice of taking the monolith-first approach to setting up an application. You have read my previous article about the pros and cons of miniservices. And now the time has come to start the transition to a miniservice architecture. How do you go about that? Below are a couple of things to be aware of, as well as a number of steps that can be taken to migrate a monolith to a miniservice architecture.

This article is the second part in a series of three, the first being 'From monolith to miniservices' and the third 'The persistent microservice'.

Pitfalls

No common understanding and goal between teams and their individual members. For the transition to succeed, it's vital that teams and their individual members work towards the same goal and use the same strategy to get there. Think about what exactly you want to accomplish by transitioning to a miniservice architecture. Do you want shorter release cycles? Replaceability? Or do you just want to go with the latest fashion in architecture? Also assess whether your team members possess enough knowledge about miniservices and their goals. Keeping in mind Conway's law, you should take a careful look at your organisation and update any process or structure that wouldn't work well with running many independent teams.

Too-tightly-coupled parts. If the monolith is not already set up in a modular way, transitioning to a miniservice architecture can become very hard. Refactoring the monolith into more clearly separated modules should be done first. Where possible, a module should only cover a single domain. Take care, though, as refactoring usually introduces unforeseen bugs.

Stateful-to-stateless transition. Miniservices should not depend on persistent state, and as such the monolith's functionalities should not depend on persistent state either, apart from a loosely coupled database system or an abstracted-away cloud storage service. Miniservices that depend on local storage are hard to scale horizontally, as that storage would need to be kept in sync. Miniservices should only access each other's data via APIs, so that each miniservice has clear ownership of its domain.

Integration testing after each split. If a monolithic application is not already being integration tested before transitioning, these tests should be set up in advance. As refactoring often introduces bugs, you should assure yourself that the system works well in its entirety before deploying. Where possible, automate these tests: computers don't get bored with testing, while humans might become less accurate over time.

Starting from scratch. One might think it's always a good idea to throw away all legacy code in favor of a new architecture. It does give you the advantage that you don't have to live with the consequences of past decisions. However, accurately estimating the amount of work involved in rebuilding a (large) application is hard, and it is often very much underestimated. It's vital that the transition be manageable for it to succeed.

Steps to enlightenment

1. Set up integration tests. As the application will be heavily in flux during the transition, a large amount of certainty in the correct functioning of the application as a whole should be created by systematically testing the system as a whole.

2. Gradually refactor the monolith, decoupling more functionalities within. Decouple modules and domains as much as possible.

3. Select a simple and easy-to-isolate functionality of the monolith. Don't go big. Single out fairly loosely coupled modules that are not too critical. Work in atomic steps: transition only one module or domain at a time. Give yourself time to learn how to migrate a part successfully.

4. Create a miniservice that has that same functionality. Having identified a single task that can be transitioned, build a new miniservice that has this task as its single responsibility. Some guidelines to take into consideration:

- Avoid, or at least minimize, dependencies of the miniservice on the monolith.
- 'Avoid the anti-pattern of only decoupling facades: only decoupling the back end service and never decoupling data.' Tightly coupled data slows down development and refactoring of individual miniservices. Decoupling data can be done in a number of ways: have each miniservice maintain ownership of private tables in a schema, set up one schema per service, or introduce a new database management system instance for each service. You could use the saga pattern instead of two-phase commits to avoid database inconsistencies (see the sketch below).
- Prefer rewriting the functionality over reusing the monolith's code if at all realistic. This way, you can use the right tools for the job, as well as update the process the functionality is used for.
- Watch out for the IKEA effect: don't be afraid to throw away code you are proud of if necessary.
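A minimal Rust sketch of the saga idea, with hypothetical in-memory stand-ins for the service calls: each local transaction gets a compensating action, and when a later step fails, the completed steps are undone in reverse order instead of holding a distributed lock.

```rust
struct SagaStep {
    name: &'static str,
    action: fn() -> Result<(), String>,
    compensation: fn(),
}

fn run_saga(steps: &[SagaStep]) -> Result<(), String> {
    let mut completed: Vec<&SagaStep> = Vec::new();
    for step in steps {
        match (step.action)() {
            Ok(()) => completed.push(step),
            Err(e) => {
                // Undo every completed step, newest first.
                for done in completed.iter().rev() {
                    (done.compensation)();
                }
                return Err(format!("saga aborted at '{}': {}", step.name, e));
            }
        }
    }
    Ok(())
}

fn main() {
    let steps = [
        SagaStep {
            name: "reserve stock",
            action: || Ok(()),
            compensation: || println!("compensate: release stock"),
        },
        SagaStep {
            name: "charge payment",
            action: || Err("card declined".into()),
            compensation: || println!("compensate: refund payment"),
        },
    ];
    if let Err(e) = run_saga(&steps) {
        println!("{}", e);
    }
}
```

In a real system each action and compensation would be an API call to a miniservice, but the control flow stays this simple.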
5. Test the miniservice, and test it thoroughly. This is vital: as monoliths might have a lot of interconnected functionality, introduced bugs should be found as early and as thoroughly as possible.

6. Deploy miniservices somewhere deployment is easy. Once the monolith is split up into many smaller services, deployments become much more frequent. This should be taken into consideration from the start of the transition. Choose a simple per-service process and automate each step. You could look into setting up a Kubernetes cluster, for example.

7. Connect the monolith with the miniservice, and remove the functionality from the monolith in one go. Omitting the removal of legacy code results in a system that is harder to maintain as a whole.

8. Run the integration tests and update. Test before deploying, and if possible, test the system with a limited number of real-world users before deploying to the whole user base.

9. Reflect on the process. Make it easier for yourself and your team members to learn from your mistakes, in order to be able to tackle more elaborate transitions. Keep a list of problems you encountered and the precautions you'll take to avoid them in the future.

10. Go back to step 2, until there is no more monolith left. Profit!

Conclusion

We think that the monolith-first approach to setting up a system of miniservices is best. This approach gives you time to stabilize the requirements and identify the domains into which an application can be separated. With clear, small steps established, and beginning with easy migrations, gradually transitioning a monolith to a miniservice architecture seems the most sensible way. Introducing a miniservice architecture involves a possibly drastic change in the way databases are operated. My next blog post will elaborate on this in more detail.

From monolith to miniservices (indeed, not microservices)

Henk Dieter

Tweede golf has built quite a few big web applications over the last ten years. As one of our specialties is the development of Symfony applications, some of these applications have become massive, with a lot of separate functionality baked into a single monolith. For now, this situation is being contained, as we've been strict about minimizing technical debt. In practice, however, it's extremely hard to completely avoid accumulating technical debt, which is one of the reasons we have started looking into introducing microservice architectures into our projects.

This article is the first part in a series of three, the second being 'Transitioning to a miniservice architecture' and the third 'The persistent microservice'.

Monolithic applications

'What's wrong with a good old monolithic application?', one might ask. Initially, not that much, actually. Especially in the initial phase of a software project, when the context, requirements and separate domains might not be entirely clear, monolithic applications provide the flexibility to refactor large parts of a project in a relatively easy way. Monoliths are also quite easy to test.

After a couple of months of development, the upsides of monolithic applications become less apparent. Refactoring code becomes progressively harder to do, and updating dependencies might introduce a lot of bugs that are hard to anticipate. As the user base of an application grows and more is asked from the hardware running the application, scaling up becomes a necessity. Monolithic applications being complex beasts, scaling vertically is the only choice, and one can only get so far by renting a higher-tier server.

Monoliths are fragile. A small bug anywhere in the system, even in some barely used, non-critical part of it, might cause the complete application to fall over and introduce downtime. This can potentially affect many, many users, and deploying a patch or update can be hard. And then there's deploying database updates: monolithic applications often store a large portion of their data in a single relational database. When persistent data is to be updated, the service might be unavailable for a significant amount of time. Users might lose their work, or worse, their interest, as a result.

Microservices and miniservices

Microservice architectures were developed to tackle these problems. Classic microservices typically handle only a very small, isolated task within the application, following the mantra of 'Do one thing, and do it well'. They're stateless and loosely coupled to one another. There might be a microservice for user authentication, one for sending e-mails and another for rendering thumbnails. Implemented properly, these systems are easy to scale horizontally: just identify bottleneck services and launch more instances accordingly. They are loosely coupled, which isolates the impact of bugs to single domains of the application, and they can be updated without introducing downtime.

[Figure: Monolith vs. microservices]

Doing one thing, and doing it well, is no universal solution, though. Integration and end-to-end testing a system with many microservices is rather difficult, even though testing a single microservice is simple. Deployment of microservices demands specialized knowledge about containerization and orchestration, or, if you're not into that, other complex deployment techniques. For a large project, this may introduce a considerable amount of overhead. Enter miniservices: a balance between monoliths and microservices.
As the classic microservice is fine-grained, we think that the domains within which the services operate should not be smaller than necessary. For example, we won't make a distinction between one microservice generating HTML and another rendering that HTML into a PDF; a miniservice collects the data and renders the page in one go. However, the PDF-rendering miniservice will not be doing any user authentication. This keeps the system as a whole maintainable, flexible and scalable.

[Figure: Microservices vs. miniservices]

Pros

How do you decide whether you should set up a monolith or a miniservice system? We've been looking into that and have come up with a couple of guidelines that can be used to find out whether miniservices are a good fit for your project. Here are some of the good things miniservice architectures may bring to your project:

Dependency management. Whenever an application has a set of dependencies, these always resolve to a tree of all the packages that need to be installed. When any dependency does not keep track of progress in the ecosystem, those dependencies become stale. In the case of strict semantic versioning, this means that the monolith as a whole is stuck on these stale dependencies. This is a problem, due to either missing new functionality from new versions of your libraries, or missing security patches. In that case, either the dependency needs to be replaced with some equivalent functionality, the functionality needs to be removed, or the functionality needs to be split out into another application. Miniservices enable splitting up the list of dependencies, which keeps the number of stale or complex dependencies contained.

Independent development. Miniservices can be developed independently from each other. As domains are clearly separated and API contracts have been established, teams only have to worry about the implementation of the tasks within their own domain.

Independence of technology. Monoliths depend on a single core technology, typically a single stack such as PHP+Symfony or Node. However, applications have varying technology requirements, depending on the required functionality. PHP is fine for defining CRUD operations, but it is not sufficient when implementing graphics operations or processing large volumes of data. Following the creed 'pick the best tool for the job', miniservices enable a disjoint set of technologies to be employed within a project. For small, highly specialized functionalities, programming languages such as C++, Rust or Go can be employed to leverage their performance or correctness characteristics.

Replaceability. At Tweede golf we sometimes quickly prototype software, with the intention of implementing a proper alternative later, when adequate funding is secured. Often these decisions stem from business or maintainability considerations. Replacing a part like this is nearly impossible to do neatly when the project is implemented as a monolith, but trivial when the miniservice approach is taken.

Reusability. Certain problems are solved in multiple projects and don't need a tailor-made solution. These problems include rendering PDF files, thumbnailing images and sending e-mails. Currently, each project team solves these problems separately. Such solutions can be developed once and deployed individually as a miniservice, but maintained commonly by a distinct team. The costs of developing these functionalities can then be shared across projects.
In order to develop and maintain such a miniservice, very specific knowledge might be required, and not every team has the luxury or budget to properly build out that functionality. A prime example of this is maintaining a mailserver, which is error-prone and difficult work, but which can easily be shared between projects.

Development and release cycle duration. Even though a set of miniservices as a whole is harder to deploy than a monolithic application, a large benefit comes from being able to individually deploy parts of your project. It is no longer required to always build and deploy the entire project. This can save time in your development and release cycle. Also, the release cycle becomes simpler and therefore easier to maintain.

Traceability. Tracing the operations of a monolithic application involves intensive logging in all parts of the application. A fatal error might cause an execution log to be incomplete or even absent in some cases. This makes post-mortem inspection and debugging of a running application hard. An application built on a miniservice architecture exposes obvious interfaces in the system that are both accessible and uniform, namely the APIs between the miniservices. Logging the communications, or communication metadata, between miniservices leaves a transparent trace of system operation.

Resource management. Resource usage, like memory or CPU, is hard to pin down to specific parts of a monolith, while individual miniservices make it straightforward to track. This enables more transparent tracking of excessive resource usage or other resource-related problems.

Robustness. A monolith's failure often propagates through the complete system. A miniservice architecture makes it more natural to isolate faults within their respective miniservice, without unnecessarily impacting other parts of the system. The miniservice architecture forces the developer to implement error handling and fault tolerance when interacting with other parts of the system via an API. When implementing functionality that is prone to failures, like generating a PDF document, it is hard to handle all possible errors within a monolithic PHP application, especially when some of the possible errors cause the PHP thread to crash. Isolating this functionality in a service makes it easier to recover from these errors: a fatal error could just cause some parts of the application to be unavailable, while others keep working.
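To illustrate that last point, here is a hedged sketch in which the made-up `render_pdf` function stands in for the HTTP call to a PDF-rendering miniservice. The caller is forced to handle failure explicitly, retries with a backoff, and degrades gracefully instead of taking the whole application down:

```rust
use std::{thread, time::Duration};

// Stand-in for the HTTP call to the PDF miniservice (hypothetical).
fn render_pdf(attempt: u32) -> Result<Vec<u8>, String> {
    if attempt < 3 { Err("renderer unavailable".into()) } else { Ok(vec![b'%']) }
}

fn render_with_retry(max_attempts: u32) -> Result<Vec<u8>, String> {
    let mut last_err = String::new();
    for attempt in 1..=max_attempts {
        match render_pdf(attempt) {
            Ok(pdf) => return Ok(pdf),
            Err(e) => {
                last_err = e;
                // Back off before retrying; the rest of the app keeps running.
                thread::sleep(Duration::from_millis(100 * attempt as u64));
            }
        }
    }
    Err(format!("PDF service gave up after {} attempts: {}", max_attempts, last_err))
}

fn main() {
    match render_with_retry(5) {
        Ok(pdf) => println!("rendered {} bytes", pdf.len()),
        Err(e) => println!("degraded gracefully: {}", e),
    }
}
```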
Cons

Like microservices, miniservices aren't perfect. Below are some downsides to consider.

End-to-end testing. End-to-end testing a miniservice architecture is hard. Running every service in the system locally can become a pain: if things are not properly set up, developers need to check out the correct code from source control and ensure all dependencies are of the correct version. Tools like docker-compose can be of great help here, though.

Debugging. In a system consisting of many miniservices, it can be quite difficult to find the cause of errors in a production setting. It's hard to recreate the state the system was in when it failed. Therefore, extensive logging should be built in throughout all parts of the system.

Communication overhead. When multiple development teams are working on the same application, they must agree on API contracts beforehand, and again whenever an API is updated. To streamline these communications, a cross-project lead is required.

Type system and IDE hints. As not all code can reside in a single repository, developers cannot easily take advantage of type systems and IDE hints for inter-service communication. Especially when multiple languages are used, defining entity models might cause discrepancies.

Knowledge. Maintaining multiple software stacks introduces a higher cost. Not every team will be able to contribute to all parts of the project. There's also a need for knowledge about orchestrating many containers in a cloud setting, using tools such as Kubernetes.

Conclusion

Miniservice-based applications are great. They're robust, they're scalable, and multiple teams can work on them simultaneously. They are a good fit for certain large projects. But setting up such an application is not easy. Companies need to be mature, as communication between teams and individual team members is vital. A good workflow for working on the projects needs to be thought out well, and specialist knowledge is required to get the system running smoothly. 'I like miniservices! But how do I start?' My next blog post contains an incomplete how-to-migrate-from-monolith-to-miniservices to get you started.

Rust as a web platform?!

Wouter

What will be the winning open source web platform five years from now? PHP, despite PHP7, is starting to show its age. Alternatives like NodeJS are workable, but far from suitable in every scenario. We'd like to tell you why we think Rust could become the new player for high-performance applications on the web.

Yes, PHP is old: it is riddled with legacy code, its speed leaves much to be desired, and the language has never really had a clean design [1]. We use PHP, that is, the backend framework Symfony, because its ecosystem of tooling and available open source bundles has become very mature. Do you want to quickly build a scalable web app with open source technology? Then PHP with Symfony is currently still The Way To Go. But if we look further into the future and take the ever-increasing demands on security and performance into account, it is clearly time for us to look for a more modern, safer and faster alternative to PHP/Symfony.

Why not Node.js?

Javascript is booming. New open source projects around the Javascript platform are popping up like mushrooms. Unfortunately, Node.js is plagued by problems similar to PHP's. It, too, is very slow. The tooling around Javascript is slapped together [2]. And the language design has likewise grown gradually, which doesn't always make the language very logical [3]. Browsers struggle to keep up with these developments [4]. Even more than with PHP, it is hard to write correct code in Javascript. The rise of (compiled) languages that add type annotations to Javascript [5] [6] helps a little, but these annotations give no actual guarantees when the code runs, making them more a band-aid over the problem than a real solution.

Rust

A fast riser is the programming language Rust [7]. The language has existed since 2010 and is actively sponsored by Mozilla. At its core it is a systems language: it strongly resembles C++, and it likewise produces an executable containing AMD64 machine code. It is, in other words, not a scripting language like PHP and Javascript. The advantage is that your code is fully checked before anything is executed at all, giving you more certainty that what runs on your servers is actually correct.

Safer than C++, faster than Go

The same holds for C++, but in C++ it is easy to address invalid memory; quite often, buffer overflows turn out to be hiding in C++ code after the fact. This problem is all but impossible when the code is written in Rust, thanks to the good design of the standard library and the presence of the borrow checker [8]. This component verifies at compile time that the program only ever works with existing objects in well-organized pieces of memory.
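A classic small example of what the borrow checker rejects: this program would create a dangling reference, the kind of bug that C++ happily compiles.

```rust
fn main() {
    let r;
    {
        let x = 5;
        r = &x; // error[E0597]: `x` does not live long enough
    } // `x` is dropped here, so `r` would be a dangling reference
    println!("{}", r);
}
```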
Garbage collection is the common alternative to the borrow checker mechanism. A garbage-collected language such as Go offers similar advantages to Rust in this way. Unfortunately, garbage collection comes with performance penalties and gives less control over memory usage. In summary, Rust is at least as fast as C++, and on top of that it has the correctness guarantees of languages like Java, Go and functional programming languages. That makes Rust very suitable for applications where performance and security are crucial. These guarantees also come in handy when developing embedded systems, such as in an Internet of Things context. Rust compiles via LLVM, so it can also generate code for ARM chipsets [9].

A small side note on Go, since it is seen as a challenger to Rust: at Tweede golf we don't choose Go, among other reasons because we find its type system too limited. Go lacks generics, for example, which makes its correctness guarantees weaker compared to Rust's. A lot more could be said about this trade-off, but this is not a "Rust vs. Go" article [10].

Rust as a web platform

Being able to compute the Fibonacci sequence does not give us a web application. First we need a web platform for Rust that offers the functionality Symfony normally takes care of for us. This web platform simply does not exist yet [11] [12]. Because we're not afraid to get our hands dirty, we started developing such a backend framework ourselves a couple of months ago. With it, we hope to lower the barrier to using Rust, initially for our own developers. Our framework consists of, and uses:

- REST, HTTP server and routing: Rocket [13]
- Input validation: Serde [14]
- ORM / database: Diesel [15] + Postgres
- Authentication: JSON Web Tokens with Medallion [16]

A minimal route definition in this stack is sketched below.
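As a taste of the routing layer, here is a hello-world sketch in the style of Rocket's own examples from this period (Rocket 0.3, which required a nightly Rust toolchain):

```rust
#![feature(plugin)]
#![plugin(rocket_codegen)]

extern crate rocket;

// GET /hello/<name> responds with a personalized greeting.
#[get("/hello/<name>")]
fn hello(name: String) -> String {
    format!("Hello, {}!", name)
}

fn main() {
    // Mount the route and start the server on the default port.
    rocket::ignite().mount("/", routes![hello]).launch();
}
```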
Our experience so far? The first thing we notice is that Rust and Rocket meet our performance requirements effortlessly [17]. Furthermore, the documentation of every piece of Rust software is generated in a uniform style via Cargo [18], so there is a baseline of reference material for everything in the Rust ecosystem. We also see that each of these libraries, and Rust as a language, has a very lively community. Issues and merge requests are picked up and resolved incredibly quickly.

Are there downsides too?

Rust for the web sounds promising, then. Sometimes the community is a little too lively, though: Diesel, among others, completely overhauled its API at least once during the development of our platform. Hopefully this will calm down as libraries see more production use. Rust also has a steep learning curve: it is hard to reassure the borrow checker that memory is being used correctly everywhere. Quickly whipping up a web app in Rust is therefore probably not in the cards, and not every programmer will take to the language.

The future of Rust

Although we at Tweede golf are quite taken with Rust, we will keep using Node.js as a backend platform for less demanding applications, and Symfony will remain important for the foreseeable future as well. For high-performance backend systems that need to be solid and sustainable, however, we think Rust is the future. In a future blog post we will go deeper into the development of our Rust web platform.

Already considering Rust for your product (application or embedded system), or keen to explore the possibilities? Our teams can help you with prototyping or implementation. Get in touch with Erik or Hugo.

References:

[1] https://whydoesitsuck.com/why-does-php-suck/
[2] https://ponyfoo.com/articles/npm-meltdown-security-concerns
[3] https://www.destroyallsoftware.com/talks/wat
[4] https://caniuse.com/
[5] https://coffeescript.org/
[6] http://www.typescriptlang.org/
[7] https://www.rust-lang.org/
[8] https://doc.rust-lang.org/1.8.0/book/references-and-borrowing.html
[9] http://blog.japaric.io/quickstart/
[10] http://julio.meroh.net/2018/07/rust-vs-go.html
[11] https://www.arewewebyet.org/
[12] https://github.com/flosse/rust-web-framework-comparison
[13] https://rocket.rs/
[14] https://serde.rs/
[15] https://diesel.rs/
[16] https://docs.rs/medallion/2.2.3/medallion/
[17] https://medium.com/sean3z/rest-api-node-vs-rust-c75aa8c96343
[18] https://doc.rust-lang.org/beta/rustdoc/what-is-rustdoc.html