2021-12-04

What is wrong with Full Stack Development

Inntel Hotel at Amsterdam, Zaandam

Table of Contents

  • What is full-stack development
  • Why is full-stack development necessary today
  • What is wrong with full-stack development
  • Conclusion

What is full-stack development


The predominant web application development model today requires splitting application logic into two parts:
  • The front-end, running on the browser.
  • The back-end, running on the server.
The front-end is typically written in JavaScript, while the back-end is typically written in Java, Scala, C#, or some other programming language. The two ends invariably communicate with each other via REST. The choice of JavaScript and REST is not due to any technical merit inherent in these technologies, (there is none,) but purely due to historical accident; see michael.gr - The Wild, Wild Web.

A web application developer can either focus on one part of the stack, or work on both parts. For reasons that will be explained further down, more often than not, web developers are asked to work on both parts simultaneously. When this happens, it is known as full-stack development.

For the purposes of this paper, the term full-stack development will refer not just to this mode of work, but also to this architectural style as a whole: full-stack development is when application logic must be written both on the server and on the client.

Full-stack development is a paradox, since it suggests a way of working which is contrary to what common sense dictates. Common sense calls for specialists each working on their own area of specialization, so one would expect to see different developers focusing on different layers of the stack, and nobody ever attempting something as preposterous as working on all layers simultaneously. However, there is a technological hurdle which renders full-stack development necessary today.

Why is full-stack development necessary today


(Useful pre-reading: About these papers)

Normally, (outside of web application development,) in a system that consists of multiple layers, only one of the layers tends to be application-specific, while all other layers tend to be general-purpose infrastructure layers that are agnostic of any application that might put them to use. Under such an arrangement, the functionality offered by each layer is dictated by what makes sense for that layer to do, so the work to be done at each layer tends to be rather self-contained and straightforward. In this scenario, each specialist can indeed work on the layer that they specialize in.

However, in web development we have a server, and we have a client, and so far we have been unable to find a solution that would allow us to confine all of our application logic to only one of them. (There have been some attempts in that direction, but they were only moderately successful, and virtually none of them survived the transition from monolithic architectures to microservices architectures.) As a result, in modern web applications, both layers are application-specific.

In the early days people did try to apply specialization and division of labor to web application development, and they found that when all the layers are application-specific, collaboration between teams working on different layers suffers, resulting in low productivity. There are too many details that have to be agreed upon by people working on different layers; too much waiting for the guys working on the layer below to finish their part before the guys working on the layer above can do their job; too much disagreement as to whose fault it is when the system is not working as expected; in general, too much back and forth, too much friction.

For this reason, full-stack development was invented: instead of dividing the workforce horizontally, it ends up being less inefficient to divide it vertically. When each developer works on a different feature of the product from top to bottom, they do not have to interact too intensively with other developers, and this represents a gain which seems to offset the loss of not having specialists working on their respective areas of specialization.

What is wrong with full-stack development


In brief, full-stack development has the following disadvantages:
  • The front-end:
    • Has limited capabilities.
      • Is confined within the sand-boxed execution environment of the browser. 
      • Admittedly, browsers today are pretty feature-rich, (actually, monstrously so,) but still, you are writing code which is running out there, on browsers, and is therefore out of your control, instead of here, on the server, where you do have control.
      • So, there are always things that you would like to accomplish but cannot accomplish on the client, so you have to suffer the additional bureaucracy of having the client communicate what you are trying to accomplish to the server, having the server do it for you, and receiving the results back on the client. That’s an awful lot of work for something as simple as, say, obtaining the current date and time regardless of client configuration or misconfiguration. (A sketch of such a round-trip is given right after this list.)
    • Suffers from incidental complexity.
      • Peculiarities of the browser environment such as URLs, HTML, the DOM, HTTP, REST, Ajax, etc.
      • Cross-browser incompatibilities and cross-browser-version incompatibilities.
      • Security hazards.
        • Code on the client must not only accomplish application goals, but it must do so while avoiding various commonly known and not-so-commonly known security pitfalls.
        • Each time a new security hazard is discovered by the security community, vast amounts of application code must be meticulously audited and painstakingly fixed.
    • Must be re-written for each targeted format (web, mobile, desktop).
      • When targeting a new format besides the web (e.g. desktop, mobile) we have to re-engineer not only the presentation markup, but also all of the application logic which is inextricably entangled with it.
      • This necessitates the creation and maintenance of multiple separate code bases that largely duplicate the functionality of each other.
      • These code bases are liable to diverge, thus causing user workflows and the overall user experience to unintentionally differ across formats.
    • Is usually written in a scripting language.
      • The code is error-prone due to scripting languages being untyped.
      • The code is hard to maintain due to untyped languages being impervious to refactoring.
      • The code is messy due to scripting languages invariably being inferior to real programming languages.
      • The code is transmitted in source code form to the browser, thus exposing potentially sensitive intellectual property.
    • Is usually written in JavaScript in particular.
      • JavaScript was originally intended for no more than a few, tiny, and isolated snippets of code per HTML page. The haphazardness of the language design reflects this intention. However, modern web applications tend to contain tens of thousands of lines of application-specific JavaScript. That is an awful lot of code in a language which is defective by design.
    • Excludes artists.
      • Artists are prevented from actively participating in the creation and maintenance of web pages: the HTML is inextricably entangled with JavaScript, so they cannot touch it.
      • Thus, artists are resigned to creating mock-ups showing how they want web pages to look, and programmers are then tasked with making the web pages look like the mock-ups. (As if the programmers did not already have enough on their hands.)
  • The back-end:
    • Is inextricably tied to REST.
      • This is because REST is impervious to abstraction.
      • REST forces reliance on binding-by-name, which undermines the coherence of the entire system and prevents static code analysis, invariably resulting in a big, unknowable chaos. (A sketch illustrating binding-by-name is given after this list.)
    • Duplicates part of the client-side application logic.
      • This is necessary in order to perform validation on the server-side too, because from a security standpoint the client must always be considered compromised. 
      • This translates to additional development and maintenance cost.
      • Inevitable discrepancies between the validation done on the client and the validation done on the server are a continuous source of bugs. (A sketch illustrating this duplication is also given after this list.)
  • The application as a whole:
    • Is split into two parts.
      • Usually having each part written in a different programming language.
      • Having The Internet interposed between the two parts.
      • Having the point of split dictated not by business considerations, but by technological limitations.
    • Mixes application with presentation.
      • A fundamental principle of graphical user interface application development is that application logic should be kept completely separate from presentation logic. This principle warns against inadvertently allowing application logic to bleed into the presentation layer; however, with full-stack development we have application logic not just bleeding into the presentation layer, but actually embarking on a massive, deliberate exodus to it.
      • One might naively think that full-stack development accomplishes separation by keeping application logic on the server and presentation logic on the client, but this is demonstrably not so:
        • The server is largely reduced to a bunch of dumb REST endpoints that perform not much more than Create, Read, Update, Delete, List (CRUDL) operations with validation. That is not application logic; that's mostly just querying and updating the data store.
        • The client not only decides how things should look, but it also decides what options should be available to the user at any moment, and what new options will become available to the user as a result of user actions. Essentially, all application workflows are implemented on the client. That's application logic par excellence.
    • Is hard to test.
      • The front-end is not functional without the back-end, so the two ends usually have to be tested in integration, necessitating such monstrosities as Selenium.
    • Prevents specialization and division of labor.
      • Full-stack development necessitates The Full-stack Developer, who is:
        • a front-end programmer,
        • a back-end programmer, 
        • a network programmer, 
        • a security expert, 
        • a user experience expert,
        • an accessibility expert, and
        • a graphic artist
      … all rolled into one, thus running the risk of being a jack of all trades, master of none.
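
The following is a minimal sketch, in TypeScript, of the round-trip mentioned earlier for obtaining the current date and time from the server instead of trusting the client's clock. The endpoint name and the Express-style server are assumptions made purely for illustration; the point is how much machinery is involved in something this trivial.

    import express from "express";   // assuming an Express-style server, purely for the sketch

    const app = express();

    // Server side: all this endpoint does is report the current time.
    app.get("/api/current-time", (_request, response) => {
        response.json({ utcMillis: Date.now() });
    });

    // Client side: a full asynchronous network round-trip, with error handling,
    // just to find out what time it is according to a clock that we actually trust.
    async function getServerTime(): Promise<Date> {
        const response = await fetch("/api/current-time");
        if (!response.ok)
            throw new Error("Could not obtain the time: " + response.status);
        const { utcMillis } = await response.json();
        return new Date(utcMillis);
    }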
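
Here is also a minimal sketch, in TypeScript, of what binding-by-name looks like from the client side. The endpoint path and the shape of the Customer record are hypothetical; the point is that nothing ties them to the actual server code, so the compiler has no way of catching a mismatch.

    // A hypothetical client-side call to a REST endpoint. The binding is purely by name:
    // nothing ties the string "/api/customers/" or the field "fullName" to the server code.
    interface Customer {
        id: number;
        fullName: string;   // if the server renames this field, the compiler stays silent
    }

    async function fetchCustomer(id: number): Promise<Customer> {
        const response = await fetch("/api/customers/" + id);   // a typo in the path is discovered only at runtime
        if (!response.ok)
            throw new Error("Request failed: " + response.status);
        return await response.json() as Customer;   // an unchecked cast: the shape of the data is taken on faith
    }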
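
And here is a minimal sketch, in TypeScript, of the validation duplication discussed above. The rule and the function names are hypothetical; in practice the server-side copy is often written in an entirely different language, such as Java or C#, which makes keeping the two copies in sync even harder. The deliberate discrepancy between the two regular expressions below is exactly the kind of bug that this duplication breeds.

    // Client-side copy of the rule, so that the user gets immediate feedback.
    function isValidUsernameOnClient(username: string): boolean {
        return /^[a-z][a-z0-9_]{2,15}$/.test(username);
    }

    // Server-side copy of the same rule, because the client must always be considered compromised.
    // Note that it has already diverged from the client-side copy: the two ends now silently
    // disagree about what constitutes a valid username.
    function isValidUsernameOnServer(username: string): boolean {
        return /^[a-z][a-z0-9_]{3,16}$/.test(username);
    }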

Conclusion


By its nature, web application development requires systems that consist of multiple layers; the current state of affairs is such that application-specific code must be running on each of these layers, and this is called full-stack development. However, as I have shown, full-stack development has a list of disadvantages which is rather extensive, and each of these disadvantages is rather severe.

Essentially, we are suffering the consequences of a technological limitation: we currently have no means of confining all application logic to the server, so we have to place application logic on the client too, and therefore we have no option but to engage in full-stack development.

Technological limitations require technological solutions, but companies with commercial goals do not usually take it upon themselves to solve the world's technological problems. Instead, they tend to make do with the existing problems, providing non-technological work-arounds to them, such as throwing more manpower into the development effort. This might make sense for each individual company, but from a global perspective, we have collectively been too busy mopping the floor to turn off the faucet.

A solution that would confine all application logic to the server and thus eliminate full-stack development has the potential of being very beneficial to the industry as a whole.
