My notes on Fielding

These are my notes on two things:
  • A YouTube video titled "Roy T. Fielding: Understanding the REST Style"
  • Roy T. Fielding's famous Ph.D. dissertation "Architectural Styles and the Design of Network-based Software Architectures".


These are my notes on the "Roy T. Fielding: Understanding the REST Style" YouTube video:

No technical information.

"It's really an accessible piece of work.  It is not full of equations.  There is one equation.  The equation is there just to have an equation, by the way."


These are my notes on "Architectural Styles and the Design of Network-based Software Architectures", the Ph.D. dissertation by Roy Thomas Fielding, U.C. Irvine.

What follows are excerpts from the dissertation, with my notes usually in parentheses.

Roy Thomas Fielding is: Chief Scientist at some tech company; Chairman, Apache Software Foundation; Visiting Scholar, W3C @ MIT CS Lab; etc.; publications, honors, awards, fellowships, etc. He was involved in authoring the Internet standards for the Hypertext Transfer Protocol (HTTP) and Uniform Resource Identifiers (URI).


"The World Wide Web has succeeded in large part because its software architecture has been designed to meet the needs of an Internet-scale distributed hypermedia system."
(He makes it sound as if it was designed this way on purpose.)

"In order to identify [...] aspects of the Web that needed improvement and avoid undesirable modifications, a model for the modern Web architecture was needed to guide its design, definition, and deployment."

(So, he admits the need to build a model after the fact.)

"An architectural style is a named, coordinated set of architectural constraints."

This dissertation defines a framework for understanding software architecture via architectural styles and demonstrates how styles can be used to guide the architectural design of network-based application software.

A survey of architectural styles for network-based applications is used to classify styles according to the architectural properties they induce on an architecture for distributed hypermedia.

I then introduce the Representational State Transfer (REST) architectural style and describe how REST has been used to guide the design and development of the architecture for the modern Web.

REST emphasizes scalability of component interactions, generality of interfaces, independent deployment of components, and intermediary components to reduce interaction latency, enforce security, and encapsulate legacy systems.

I describe the software engineering principles guiding REST and the interaction constraints chosen to retain those principles, contrasting them to the constraints of other architectural styles.

Finally, I describe the lessons learned from applying REST to the design of the Hypertext Transfer Protocol and Uniform Resource Identifier standards, and from their subsequent deployment in Web client and server software.



The guideline that “form follows function” comes from hundreds of years of experience with failed building projects, but is often ignored by software practitioners.

"The hyperbole of The Architects Sketch may seem ridiculous, but consider how often we see software projects begin with adoption of the latest fad in architectural design, and only later discover whether or not the system requirements call for such an architecture. Design-by-buzzword is a common occurrence."

(Aa-aa-aa-aa-meeeen, brutha!)

"This dissertation explores a junction on the frontiers of two research disciplines in computer science: software and networking. Software research has long been concerned with the categorization of software designs and the development of design methodologies, but has rarely been able to objectively evaluate the impact of various design choices on system behavior. Networking research, in contrast, is focused on the details of generic communication behavior between systems and improving the performance of particular communication techniques, often ignoring the fact that changing the interaction style of an application can have more impact on performance than the communication protocols used for that interaction."

"My work is motivated by the desire to understand and evaluate the architectural design of network-based application software through principled use of architectural constraints, thereby obtaining the functional, performance, and social properties desired of an architecture. When given a name, a coordinated set of architectural constraints becomes an architectural style."

"Over the past six years, the REST architectural style has been used to guide the design and development of the architecture for the modern Web, as presented in Chapter 6."

(So, I am not sure I understand: did the author come up with REST, or is he just documenting it?)

Chapter 1

"This raises an important distinction between software architecture and what is typically referred to as software structure: the former is an abstraction of the run-time behavior of a software system, whereas the latter is a property of the static software source code"

Chapter 2

"The primary distinction between network-based architectures and software architectures in general is that communication between components is restricted to message passing [6], or the equivalent of message passing if a more efficient mechanism can be selected at run-time based on the location of components [128]."

"Tanenbaum and van Renesse [127] make a distinction between distributed systems and network-based systems: a distributed system is one that looks to its users like an ordinary centralized system, but runs on multiple, independent CPUs. In contrast, network-based systems are those capable of operation across a network, but not necessarily in a fashion that is transparent to the user. In some cases it is desirable for the user to be aware of the difference between an action that requires a network request and one that is satisfiable on their local system, particularly when network usage implies an extra transaction cost [133]. This dissertation covers network-based systems by not limiting the candidate styles to those that preserve transparency for the user."

"An interesting observation about network-based applications is that the best application performance is obtained by not using the network. This essentially means that the most efficient architectural styles for a network-based application are those that can effectively minimize use of the network when it is possible to do so, through reuse of prior interactions (caching), reduction of the frequency of network interactions in relation to user actions (replicated data and disconnected operation), [...]"
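
(To make the caching point concrete, here is a toy sketch in Python; the fetch_resource function and its fake network counter are entirely my own invention, not anything from the dissertation:)

```python
from functools import lru_cache

# A hypothetical illustration of "reuse of prior interactions": cache the
# result of a fetch so that repeated requests for the same resource never
# touch the network again.

network_calls = 0

@lru_cache(maxsize=128)
def fetch_resource(uri):
    global network_calls
    network_calls += 1           # stands in for an actual network round trip
    return "representation of " + uri

fetch_resource("/orders/1")
fetch_resource("/orders/1")      # served from the cache; no second round trip
```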

"Scalability refers to the ability of the architecture to support large numbers of components, or interactions among components, within an active configuration." (I do not think that's a good definition of scalability.)

"Scalability can be improved by simplifying components, by distributing services across many components (decentralizing the interactions), and by controlling interactions and configurations as a result of monitoring."

(I think scalability is something better thought of as achieved or not achieved, rather than something sort of achieved and then improved upon.)

"Styles influence these factors by determining the location of application state, the extent of distribution, and the coupling between components."
(Interestingly enough, even though the author appears to have gotten the previous two sentences wrong, this conclusion appears to be correct.)

"Generality of connectors leads to middleware."

(I do not object to that, but I have no idea what the author is getting at with it.)

"Modifiability is about the ease with which a change can be made to an application architecture. Modifiability can be further broken down into evolvability, extensibility, customizability, configurability, and reusability, as described below. A particular concern of network-based systems is dynamic modifiability [98], where the modification is made to a deployed application without stopping and restarting the entire system."

"the system must be prepared for gradual and fragmented change, where old and new implementations coexist, without preventing the new implementations from making use of their extended capabilities"

(I think this may be a fallacy, or at least only possible in trivial scenarios. What is far more likely to happen is that the introduction of a new feature will be incompatible with keeping an old feature around in any way, shape, or form. Essentially, the only way for the old functionality to remain available will be to re-implement the associated module so that it emulates the old functionality using the new functionality. (Providing an "illusion" of the old functionality.) And then, should this completely rewritten reincarnation of the old implementation be allowed to keep the old version number? In theory, if your testing is not just extremely robust but actually perfect, then yes. In practice, no.)

"Evolvability represents the degree to which a component implementation can be changed without negatively impacting other components."

"Extensibility is defined as the ability to add functionality to a system. Dynamic extensibility implies that functionality can be added to a deployed system without impacting the rest of the system."

"Customizability refers to the ability to temporarily specialize the behavior of an architectural element, such that it can then perform an unusual service. A component is customizable if it can be extended by one client of that component’s services without adversely impacting other clients of that component. [...] Customizability is a property induced by the remote evaluation and code-on-demand styles"

"Configurability is related to both extensibility and reusability in that it refers to post-deployment modification of components, or configurations of components, such that they are capable of using a new service or data element type."

"Reusability is a property of an application architecture if its components, connectors, or data elements can be reused, without modification, in other applications. The primary mechanisms for inducing reusability within architectural styles is reduction of coupling (knowledge of identity) between components and constraining the generality of component interfaces."

(*Constraining* the generality of component interfaces? Is that an error? I thought that the more general the interface, the more reusable the component.)

"Visibility [...] refers to the ability of a component to monitor or mediate the interaction between two other components."

"Software is portable if it can run in different environments."

"Reliability, within the perspective of application architectures, can be viewed as the degree to which an architecture is susceptible to failure at the system level in the presence of partial failures within components, connectors, or data. Styles can improve reliability by avoiding single points of failure, enabling redundancy, allowing monitoring, or reducing the scope of failure to a recoverable action."

Chapter 3

"The purpose of building software is not to create a specific topology of interactions or use a particular component type — it is to create a system that meets or exceeds the application needs. The architectural styles chosen for a system’s design must conform to those needs, not the other way around."

Chapter 4

"Working groups within the Internet Engineering Taskforce were formed to work on the Web’s three primary standards: URI, HTTP, and HTML. The charter of these groups was to define the subset of existing architectural communication that was commonly and consistently implemented in the early Web architecture, identify problems within that architecture, and then specify a set of standards to solve those problems." 

(yup, that pretty much sums it up: it began as chaos, and any attempts to put the chaos into order were post-hoc.)

"The next chapter introduces and elaborates the Representational State Transfer (REST) architectural style for distributed hypermedia systems, as it has been developed to represent the model for how the modern Web should work. REST provides a set of architectural constraints that, when applied as a whole, emphasizes scalability of component interactions, generality of interfaces, independent deployment of components, and intermediary components to reduce interaction latency, enforce security, and encapsulate legacy systems."

Chapter 5

"The REST interface is designed to be efficient for large-grain hypermedia data transfer, optimizing for the common case of the Web, but resulting in an interface that is not optimal for other forms of architectural interaction."

"REST is defined by four interface constraints: identification of resources; manipulation of resources through representations; self-descriptive messages; and, hypermedia as the engine of application state."
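
(A toy sketch of what those four constraints might look like for a made-up order resource; the URIs, fields, and link relations below are my own illustration, not Fielding's:)

```python
# A hypothetical HTTP response illustrating the four interface constraints:
response = {
    # 1. Identification of resources: the resource is named by a URI.
    "uri": "/orders/42",
    # 3. Self-descriptive message: the message says how to interpret itself.
    "media_type": "application/json",
    # 2. Manipulation through representations: the client changes the resource
    #    by sending back a modified representation, not via remote method calls.
    "body": {"status": "open", "total": 99.50},
    # 4. Hypermedia as the engine of application state: the next legal state
    #    transitions are discovered from links, not hard-coded in the client.
    "links": {"cancel": "/orders/42/cancellation", "items": "/orders/42/items"},
}

def next_transitions(resp):
    # The client drives application state purely from the links it received.
    return sorted(resp["links"])
```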


continue on last paragraph of page 83


On scripting languages

Michael Belivanakis 2017

Note: this is a first draft. It will be heavily edited. It may contain statements that are inaccurate or just plain wrong. It may also contain language that is inappropriate. There are bound to be corrections after I receive some feedback.

Historically, the difference between scripting languages and "real" programming languages has been thought of as the presence or absence of a compilation step. However, from time to time we have seen interpreters for compiled languages, and we have also seen compilers for languages that were thought of as scripting languages. Furthermore, some scripting engines today internally compile to bytecode, and some even to machine code, while many compiled languages are compiled to bytecode instead of machine code, and this bytecode is at times interpreted. So, compiled vs. interpreted does not seem to be the real differentiating factor between real programming languages and scripting languages. Nonetheless, we can usually tell a scripting language when we see one. So, what is it that we see?

I would like to suggest that the actual differentiating factor between scripting languages and real programming languages is nothing but the presence or absence of strong typing. In other words, it boils down to presence or absence of semantic checking. The seemingly coincidental fact that strongly typed languages tend to be compiled, while weakly typed languages tend to be interpreted, can be explained in full as a consequence of the primary choice of strong vs. weak typing:
  • If a language is strongly typed then it may contain detectable semantic errors, so a compilation step is very useful because it will unfailingly locate all the semantic errors that would otherwise only be detected at runtime.
  • On the other hand, if a language is weakly typed then the need to parse all of the code in advance is lessened, because the only errors that such parsing could possibly reveal would be syntactic ones.
So, this leaves us with the following postulation:
Real Languages = Strongly Typed = Semantic Checking = Usually compiled.
Scripting Languages = Weakly Typed = No Semantic Checking = Usually interpreted.
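
To make the postulation concrete, here is the kind of error that semantic checking catches at compile time, but that a language without it happily accepts as source code. (Python stands in for the scripting side here, and add_tax is a made-up example function.)

```python
# The call below is perfectly legal source code in a dynamically checked
# language; the error only surfaces when (and if) that line actually executes.
def add_tax(price):
    return price * 1.21

ok = add_tax(100)            # fine at run time

try:
    # A compiler performing semantic checking rejects this call outright;
    # here it explodes only at run time, and only on this code path.
    add_tax("one hundred")
except TypeError as e:
    caught = type(e).__name__
```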
And yet, people tend to like scripting languages, and tend to actually write lots of code in them, supposedly because they are "easier". This immediately brings to my mind the famous quote by Dr. Edsger W. Dijkstra, taken from a different context, but equally applicable to the situation at hand:
[...] some people found error messages they couldn't ignore more annoying than wrong results, and, when judging the relative merits of programming languages, some still seem to equate "the ease of programming" with the ease of making undetected mistakes.

Arguments I hear in favor of scripting languages

Argument: It is easy to write code in it; look, the "hello, world!" program is a one-liner.

Rebuttal: What this means is that this scripting language is a very good choice for writing the "hello, world!" program. The ease with which you may write "hello, world!" is no indication whatsoever about the ease with which a non-trivial system may be developed, tested, debugged, maintained, and extended. On the contrary, a scripting language which makes it possible for you to write "hello, world!" in a single line achieves this by introducing a few trade-offs; it offers built-in functionality without the need to explicitly import it, which in turn means that there are identifiers always in scope even when not needed; it does not require code to be placed in classes, which means that it is either not object-oriented, or it mixes paradigms; and it does not require code to be placed in functions, which means that either its syntax is trivial, or again, it mixes paradigms. The moment you write anything non-trivial, you will of course need to be able to import namespaces, and to put everything in classes and methods, so the fact that the language does not require them buys you close to nothing.

Argument: No, I mean it is really terse. There are many things besides "hello, world!" that I can write in one line.

Rebuttal: Sure, you can write them in one line. But can you read them? Terseness appears to be the modern trend, so as real programming languages keep evolving they are also receiving features that make more and more things possible in one line. Take lambdas and the fluent style of invocations for example. However, this is always at the expense of readability and debuggability. So, terseness is not the exclusive domain of scripting languages anymore, and to the extent that scripting languages fare better in this domain it is debatable whether it is an advantage or a disadvantage.

Argument: There are lots of libraries for it.

Rebuttal: Seriously? There are more libraries for your scripting language than there are for Java?

Argument: I don't have to compile it; I just write my code and run it.

Rebuttal: I also just write my code and run it. When I hit the "launch" button, my IDE compiles my code in the blink of an eye and runs it. The difference between you and me is that if I have made any errors, I am told so before wasting my time running it. But what am I saying, being told that there are errors in your code probably counts as a disadvantage for you, right?

Argument: I am not worried about errors, because I use testing.

Rebuttal: Testing is an indispensable quality assurance mechanism for software, but it does not, in and by itself, guarantee correctness. It is too custom-made, too subjective, and too fragmentary. You can easily forget to test something, you can easily test the wrong thing, and you can easily test "around" a bug, accidentally creating tests that pretty much require the bug to be in place in order to pass. Despite these deficiencies, testing is still very important, but it is nothing more than a weapon in our arsenal against bugs. This arsenal includes another weapon, which is closer to the forefront of the battle against bugs than testing is, and it is comprehensive, generic, 100% objective, and definitive. This weapon is called strong typing. It is also nothing but just another weapon, but it has so far been considered as fundamental and indispensable. Alas, this hard won realization from times of yore seems to be lost in the modern generation of programmers, who think they are going to re-invent everything because they know better.

Argument: It has lots and lots of built-in features.

Rebuttal: Sure, and that's why scripting languages are not entirely useless. If the only thing that matters is to accomplish a certain highly self-contained goal of severely limited scope in as little time as possible, then please, by all means, do go ahead and use your favorite scripting language with its awesome built-in features. However, if the project is bound to take a life of its own, you are far better off investing a couple of minutes to create a project in a real programming language, and to include the external libraries that will give you the same functionality in that language. Built-in features do not only come with benefits; in contrast to libraries, they are much more difficult to evolve, because even a minute change in them may break existing code. Also, built-in features usually have to be supported forever, even after better alternatives have been invented, or after they simply go out of style, so over time scripting languages tend to gather unnecessary baggage.

Argument: But really, it is so much easier! Look here, in one statement I obtain a list and assign its elements to individual variables!

Rebuttal: That's great, I bet this has slashed your time to market by half. Seriously, my compiled language of choice has its own unique, arcane, hacky syntax quirks that I could, if I wanted to, claim that they make things so much easier for me. Some of them are not even that arcane. For example, instead of having to add comments within a method about each one of the typeless arguments that it accepts, explaining what the actual type of the argument is, so that the IDE can parse those comments and provide me with some rudimentary argument type documentation and checking, I get to simply declare the type of each argument together with the argument, as part of the syntax of the language! Imagine that!
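
To illustrate the contrast, here it is in Python, whose optional annotations stand in for a typed language's argument declarations. (Both functions are made up for illustration; note that Python itself does not enforce the annotations, it merely records them for tools to read.)

```python
# Rudimentary type documentation via comments, for the IDE to parse:
def area_commented(w, h):
    # :param w: float, width in metres
    # :param h: float, height in metres
    return w * h

# The type declared together with the argument, as part of the syntax:
def area_declared(w: float, h: float) -> float:
    return w * h
```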

Argument: It is trendy. It is hip.

No contest here. I can't argue with hipsters.

The lack of semantic checking

Lack of semantic checking means that errors can be made, which will not be caught at the earliest moment possible, which is during compilation, or better yet, during editing in any decent IDE. Therefore, lack of semantic checking means that errors can be made more easily, which in turn inescapably means that there will be a somewhat increased number of bugs that will go undetected until production. This, by itself, is enough to classify scripting languages as unsuitable for everything but the most trivial usage, and the debate should be over right there; we should not need to say anything more.
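
Here is a made-up example of such a bug, in Python: the misspelling is legal source code, so nothing flags it before the offending line actually runs.

```python
# Hypothetical: a typo silently creates a NEW attribute instead of updating
# the existing one. A compiler (or semantic checking in an IDE) would flag
# the undeclared name; without it, the bug ships.
class Invoice:
    def __init__(self):
        self.total = 0

    def add(self, amount):
        self.totl = self.total + amount   # typo: 'totl' instead of 'total'

inv = Invoice()
inv.add(10)
# inv.total is still 0 -- the bug went undetected until someone looked.
```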

But here is more, for the sake of the exercise.

Lack of semantic checking means that your IDE cannot provide you with many useful features that you get with strongly typed languages. Specifically, you get limited versions of some or all of the following features, or you do not get them at all:
  1. Context-sensitive auto-completion. Since any parameter to any function can be of any type, the IDE usually has no clue as to which of the variables in scope may be passed as a parameter to a function and which may not. Therefore, it cannot be smart about suggesting what to auto-complete, and it has to suggest either everything that is in scope, or nothing at all.
  2. Member Auto-completion. Since any variable can be of any type, the IDE usually has no clue as to what member fields and functions are exposed by any given variable. Therefore, it cannot suggest anything.
  3. Find all references. Since any variable can be of any type, the IDE usually has no clue as to where a given type is used, or if it is used at all. This in turn means that when you are looking for usages of some type you have to resort to text search, which is a sub-optimal solution. Text search requires constant fiddling with search options like whole word vs. any part of word, case sensitive vs. insensitive, current folder tree vs. whole project (if there is even such a notion), etc., and despite all the fiddling, it still usually includes irrelevant synonyms in the search results. Furthermore, text search is only useful when you already know what you are looking for, and you explicitly set out to look for it in particular. Contrast this with strongly typed languages, where the IDE knows at any given moment all the locations where every single one of your identifiers is used, keeps giving you visual clues about them (including visual clues about identifiers that are unused), and can very accurately list all references with a single click.
  4. Refactoring. When the IDE has no knowledge of the semantics of your code, it cannot perform any refactoring on it. IDEs that offer refactoring features on untyped languages are actually faking it; they should not be calling it refactoring, they should be calling it cunning search and replace. And needless to say, a) it is not always correct, and b) in the event that it will severely mess up your code, you will have no way of knowing until you run the code, because remember, there is no semantic checking.

The horrible syntax

Most scripting languages suffer from a severe case of capriciously arcane and miserably grotesque syntax. No, beauty is not in the eye of the beholder; if you think that the issue of PHP aesthetics is a subjective one, you should seek help from a qualified professional. The syntax of scripting languages tends to suffer either because their priorities are all wrong by design, or because they were hacked together in a weekend without too much thought, or simply due to plain incompetence on the part of their creators.

Scripting languages that have their priorities wrong are, for example, all the shell scripting languages. Their priorities are wrong by design, because they aim to make strings (filenames) look and feel as if they are identifiers, so that you can type commands without having to enclose them in quotes, as if this convenience was the most important thing ever. Actually, it would have been absolutely fine to offer this convenience if all we ever wanted to do with these scripts was to list sequences of programs to execute, but the moment we need to use any actual programming constructs, what we have in our hands is a string escaping nightmare of epic proportions.

A scripting language that owes its bad syntax to being hastily hacked together is JavaScript. Brendan Eich, its creator, has admitted that the prototype of JavaScript was developed in 10 days, and that it was never meant for anything but short snippets. He is honest enough to speak of his own creation in derogatory terms, and to accept blame. He has also championed efforts such as asm.js and WebAssembly, which allow the Web to run code that was not written in JavaScript at all.

A scripting language that owes its horrific syntax to lack of competence on the part of its creator is PHP. Rasmus Lerdorf, its creator, is quoted on the Wikipedia article about PHP as saying "I don’t know how to stop it, there was never any intent to write a programming language […] I have absolutely no idea how to write a programming language, I just kept adding the next logical step on the way."

So, from the above it should be obvious that most scripting languages are little toy projects that were created by hackers who simply wanted to prove to themselves that they could actually build something like that, without intending them to be used outside their own workbench. The lack of semantic checking in scripting languages is usually not a conscious choice, but a consequence of the very limited effort that usually goes into creating them. The fact that some of them catch on and spread like wildfire simply shows how eager the industry is to adopt any contemptible piece of nonsense without any critical thinking whatsoever, as long as it helps solve some immediate problem at hand.

That little performance issue

Performance is mostly a non-issue, because scripting languages tend to be used in situations where performance is not required, while in the rare cases where performance is necessary, external libraries can be used. (And there are of course some odd cases where performance is of concern and yet a scripting language is chosen, and they do in fact suffer horrendous performance consequences; take node.js, for example.) This is important to state real quick before moving on, so as to be clear about it: on computationally expensive tasks, such as iterating over all color values of an image to manipulate each one of them, there is no way that a scripting language will perform anywhere close to Java, just as there is no way that Java will perform anywhere close to C++. Stop arguing about this.

What scripting languages are good for

Scripting languages are useful when embedded within more complex applications written in real programming languages, mainly as evaluators of user-supplied expressions, or, in the worst case, as executors of user-supplied code snippets.

Scripting languages are useful when shortening the development time from first opening the editor to the first run of the program is far more important than anything else. Under "anything else" we really include everything else: performance, understandability, maintainability, testability, everything, even correctness.

Scripting languages are useful when the program is so trivial, and its expected lifetime is so short, that it is hardly worth the effort of creating a new folder with a new project file in it. The corollary of this is that if it is worth creating a project for it, then it is worth using a real programming language.

Scripting languages are useful when the code to be written is so simple that bugs can be easily detected by simply skimming through the code. The corollary of this is that if the program is to be even slightly complex, it should be written in a real programming language. (Adding insult to injury, scripting languages tend to have such capricious write-only syntax that it is very hard to grasp what any given line of code does, let alone vouch for it being bug-free.)


So, you might ask, what about the hundreds of thousands of successful projects written in scripting languages?  Are they all junk?  Do they represent a massive waste of time?  And what about the hundreds of thousands of programmers all over the world who are making extensive use of scripting languages every day and are happy with them?  Are they all misguided?  Can't they see all these problems?  Are they all ensnared in a monstrous collective delusion?

Yep, that's exactly it.



From http://wiki.c2.com/?SeriousVersusScriptingLanguages

Scripting languages emphasize quickly writing one-off programs and light-duty "gluing" of components and languages.
Serious languages emphasize writing long-lived, maintainable, fast-running programs.

From https://danluu.com/empirical-pl/

“I think programmers who doubt that type systems help are basically the tech equivalent of an anti-vaxxer”
the effect isn’t quantifiable by a controlled experiment.
Misinformation people want to believe spreads faster than information people don’t want to believe.


Devoxx 2016 Belgium - Microservices Evolution: How to break your monolithic database by Edson Yanaga

My notes on Devoxx 2016 Belgium - Microservices Evolution: How to break your monolithic database by Edson Yanaga (I attended this conference)

Reduce maintenance window
Achieve zero downtime deployments
"Code is easy, state is hard"
Changes in a database schema from one version to another are called database migrations
Tools: Flyway, Liquibase
Migrations require back and forward compatibility
Baby steps = Smallest Possible Batch Size
Too many rows = Long Locks
Shard your updates (not updating the entire table in one go)
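
(A toy sketch of sharded updates, using Python with an in-memory SQLite database; the table and column names are made up to match the rename example in these notes:)

```python
import sqlite3

# Backfill a new column in small batches, so that no single UPDATE statement
# holds a long lock on the entire table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, wrong TEXT)")
conn.executemany("INSERT INTO customers (id, wrong) VALUES (?, ?)",
                 [(i, "name-%d" % i) for i in range(1, 251)])
conn.execute("ALTER TABLE customers ADD COLUMN correct TEXT")

BATCH = 100
for lo in range(1, 251, BATCH):
    # Each UPDATE touches at most BATCH rows, so locks stay short.
    conn.execute(
        "UPDATE customers SET correct = wrong WHERE id >= ? AND id < ?",
        (lo, lo + BATCH))
    conn.commit()   # release locks between batches
```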

Renaming a column
    Instead of renaming in a single statement (which breaks the currently deployed code that still reads the old name):
        ALTER TABLE customers RENAME COLUMN wrong TO correct;
    add the new column and copy the data over in shards:
        ALTER TABLE customers ADD COLUMN correct VARCHAR(20);
        UPDATE customers SET correct = wrong WHERE id < 100;
        UPDATE customers SET correct = wrong WHERE id >= 100 AND id < 200;
        (later) ALTER TABLE customers DROP COLUMN wrong;

Adding a column
ADD COLUMN, setting NULL/DEFAULT value/computed value
Next release: Use Column

Renaming / Changing Type / Format of a Column:
Next release: ADD COLUMN, copy data using small shards
Next release: Code reads from old column and writes to both
Next release: Code reads from new column and writes to both
Next release: Code reads and writes from new column
Next release: Delete old column

Deleting a column
Next release: Stop using the column, but keep updating it
Next release: Delete the column

For migrating from a monolithic application with a monolithic database to many microservices, each with its own database:
    Use event sourcing.
        Tool: debezium.io
    You tell it which tables you want to monitor, and from then on it monitors them and generates an event for each DDL/DML statement you issue.
    The event is propagated to as many event consumers as you want, so microservices can receive these events and update their own databases.
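
(A sketch of that idea in Python; the event shape below is made up for illustration and is not Debezium's actual change-event format:)

```python
# Hypothetical change-data-capture event, and a microservice consumer
# applying it to its own local store.
event = {"table": "customers", "op": "UPDATE", "key": 42,
         "after": {"id": 42, "name": "Ada"}}

local_store = {}

def apply_event(store, ev):
    # Each microservice keeps its own database in sync by replaying events.
    if ev["op"] in ("INSERT", "UPDATE"):
        store[ev["key"]] = ev["after"]
    elif ev["op"] == "DELETE":
        store.pop(ev["key"], None)

apply_event(local_store, event)
```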

"HTTP and REST are incredibly slow"

Devoxx US 2017, Knowledge is Power: Getting out of trouble by understanding Git by Steve Smith

My notes on Devoxx US 2017, Knowledge is Power: Getting out of trouble by understanding Git by Steve Smith

"If that doesn't fix it, git.txt contains the phone number of a friend of mine who understands git. Just wait through a few minutes of 'It's really pretty simple, just think of branches as...' and eventually you'll learn the commands that will fix everything."

GOTO 2016 - Microservices at Netflix Scale: Principles, Tradeoffs & Lessons Learned - R. Meshenberg

My notes on GOTO 2016 - Microservices at Netflix Scale: Principles, Tradeoffs & Lessons Learned - R. Meshenberg

They have a division making a layer of tools for other teams to build their stuff on top of it.

Exceptions for statelessness are persistence (of course) but also caching.

Destructive testing - Chaos Monkey -> Simian Army - in production, all the time. (During office hours.)

Their separation of concerns looks like a grid, not like a vertical or horizontal table.

They have open sourced many of their tools, we can find them at netflix.github.com

GOTO 2015 - Progress Toward an Engineering Discipline of Software - Mary Shaw

My notes on GOTO 2015 - Progress Toward an Engineering Discipline of Software - Mary Shaw


17:28 past the bridges and into software engineering

Software Engineering is all design. Production used to be printing the CDs, and nowadays it is hitting the "deploy" button.

"scaling the costs to the consequences" -- the point is not to minimize the cost, the point is to scale it to the consequences.  Risks must be taken, and if the potential gains are huge, then the risks can be correspondingly large.

GOTO 2015 - DDD & Microservices: At Last, Some Boundaries! - Eric Evans

My notes on GOTO 2015 - DDD & Microservices: At Last, Some Boundaries! - Eric Evans

Microservices and Netflix - what is the connection?

Isolated data stores

"A service is something that can consume messages and can produce messages"