This is a commentary that is long overdue. There are literally hundreds of template languages, that is, languages that transform some kind of input into a (usually text-based) output. XSL is one of them; Razor is another. Razor was introduced in 2010 as the successor to the classic ASP.NET (Web Forms) view engine. Along with it, Microsoft released a whole new toolset, namely MVC and EF. MVC was long overdue, seeing that writing file-based web applications is very much out of date, and EF is what I would call YAORM, Yet Another Object-Relational Mapper, and a very bad one.
I grew up with XML, at a time when it was hyped to the point of acquiring a negative stigma, because it was the hammer that turned everything into a nail, even things not remotely resembling a nail. In that era, a whole ecosystem of tools to work with XML was built: every major programming language got an XML parser and an XSL transformer, along with extended Unicode support (which XML requires), XML databases and, last but not least, XHTML. Everything suddenly became XML. People got confused because XML was so powerful: nobody understood namespaces and their use cases, nobody understood XSL(T), nobody understood SAX, although those tools were fast, powerful and very versatile.
This seems to have induced a change of mind in people who could not cope with XML and all the tools available to them. New, useless and very limited template languages popped up, along with rapid development frameworks and new text representations of object-oriented data structures, like YAML and JSON, the latter being the cancer of Web 2.0.
Let's pick on JSON, because it is an easy target:
- It's generally less expressive, because it has no distinction between attributes and elements.
- It doesn't have namespaces, so an attribute named "version" could mean anything and nothing, and you cannot intermix two documents because of the potential collisions between same-named attributes.
- You can only specify the charset at the transport level, i.e. by sending the correct charset in an HTTP header; an on-disk file has no inherent charset. You can read literally hundreds of forum posts by people whose JSON server responses get interpreted in the wrong charset.
- There is no way to validate a JSON document at the syntax or content level: it is usually parsed with "eval", which is very liberal, and there is no schema language like DTD or XSD. An interrupted JSON response can be mistaken for a complete one. It is also possible for one JSON parser to accept a document and another to reject it, because in reality it never was valid.
- In my opinion, it may be more compact, but it is at the same time harder for a human to read. If human readability hadn't been a goal, a binary encoding would have been better suited, because it is smaller and better defined; one established example is ASN.1.
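The validation point above is easy to demonstrate. As a sketch, here is Python's strict `json` parser standing in for any spec-conforming parser: documents that an eval-style, lenient parser would happily swallow (single quotes, trailing commas) and a truncated response are all rejected, which is exactly the disagreement between parsers described above.

```python
import json

# Three documents a lenient, eval-style parser might accept:
# single-quoted keys, a trailing comma, and a truncated response.
docs = ["{'version': 1}", '{"items": [1, 2, ]}', '{"ok": true']

for doc in docs:
    try:
        json.loads(doc)
        print("accepted:", doc)
    except json.JSONDecodeError as err:
        print("rejected:", doc, "->", err.msg)
```

All three documents are rejected by the strict parser; whether they ever reach your application code depends entirely on which parser the client happens to use.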
At this point, we can look at XML, HTML, XSL and XForms, and see how all those pieces could have fit together: the server providing a full HTML page without additional AJAX requests; realtime section updates without writing additional code, because the browser could have reused the XSL code used on the server side; a real MVC implementation that doesn't need some funky translation layer between the unstructured form submit and the structured model data; and many things more.
ASP.NET MVC and EF
Now that we are stuck with this shitload of legacy web programming stuff, Microsoft tried to help us out with its two new frameworks. MVC translates requests into server method calls, without a gigabyte-sized viewstate hauled through each page and without pretending that the server and the client are the same, and EF lets us translate objects into database rows. If you are at this point thinking about using EF: don't. If you must use an ORM at all, use one of the ORMs that have existed since the dawn of time, like NHibernate, because EF is so bad it's not even able to map an ordered 1:n list. There are nice examples for download, but if you need an ordered list, you're screwed, and you will usually find out about this deficiency when you are halfway through developing your application. So, just don't use it.
As for the MVC part: it has custom routing, it has a decent transformation layer between form data and your objects, and it's extensible, so it's basically usable. When you download the package, you get some example applications and some standard controllers to tinker with, and that's when you meet Razor, the almighty, bestest-of-the-best, newest-of-the-new template language. It sucks. It sucks so hard it's difficult to describe. Even the CRUD samples provided with the framework need several templates per controller, all basically duplicating the whole data structure, each slightly different to accommodate the differences between creation, modification, viewing and deletion. If you add or remove a single field in your object, you need to update each and every template. I don't even know why it is called a template language. Razor has so many problems, we need a list:
- It is not modular, at least not beyond the file level. You can define master/parent templates, but then again, good ol' ASP.NET could do the same thing, even better, with ContentPlaceHolder.
- You can define sections and mark them as optional or mandatory, and each template in the chain can fill one in, but only one template can define the content of a given section. This may be hard to grasp, but the basic concept of object orientation is overriding what a parent class does in a more specialized manner, and Razor does not allow this: a child template cannot override an already defined section. The parent has to accommodate this with an if-else clause, so that templates that fill in a section and templates that don't both work without errors.
- Razor uses a kind of heuristic, borderline incomprehensible syntax to distinguish between C# code and HTML, which more often than not interferes with what you want to do. I hope you like the @ symbol, because you will be using it a lot.
- It uses C#/.NET code as its main building block, which means you have bound yourself to never using anything besides .NET for your application; if you want to switch, your templates are useless. It also means that the view (that's the V in MVC) can query beyond the model (that's the M in MVC). This was one of the major problems that led to the death of Umbraco v5, and especially to its abysmal performance even for simple sites. It also breaks the M-V-C separation that makes the pattern so attractive in the first place: with Razor, you can pull in any .NET namespace and any class, and use them to do anything you want, thus creating a controller-view hybrid that will never work outside your application and is also hard to unit test.
- Razor heavily relies on .NET classes to render form elements. The system is pluggable and uses class metadata, which is good. But it also means that you are moving more and more of the code that renders HTML back into your C# codebase, away from the templating system, and you are using reflection, which hurts performance. The lack of modularity makes this necessary: each element effectively becomes a user control, as known from the ASP.NET world.
- Razor again doesn't know anything about charsets, schemas and validation.
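The section limitation from the list above is easiest to see in code. The following is a minimal sketch (file names and content are illustrative, not from any shipped sample) of the if-else workaround a parent layout needs:

```cshtml
@* _Layout.cshtml (parent): must guard against views that don't fill the section *@
<div id="sidebar">
    @if (IsSectionDefined("Sidebar")) {
        @RenderSection("Sidebar", required: false)
    } else {
        <p>Default sidebar</p>
    }
</div>

@* Child.cshtml: fills the section in *@
@section Sidebar {
    <p>Custom sidebar</p>
}

@* Once Child.cshtml defines "Sidebar", no further template in the chain can
   redefine or extend it; there is no equivalent of calling base behavior the
   way an overriding method can call its base class. *@
```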
XSLT to the rescue
XSL(T) addresses several of those problems. First of all, XSLT is a Turing-complete functional language in its own right, which means that, unlike Razor, it usually doesn't need the help of an external language. That also makes it hard to learn. On the other hand, it is very modular: you can define a semantic template which, depending on the mode used to call it, transforms in very different ways. For example, a single template plus some imports lets you define a data structure in one place and produce the different types of output needed for CRUD depending on how you access the template. It also gets all the benefits of the XML infrastructure for free.
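As a sketch of the mode idea (element names like `person` are assumptions, not from any real schema), the same node can be rendered as a read-only view or as an editable form, with the caller choosing the mode:

```xml
<!-- person.xsl: one stylesheet, two renderings of the same node -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <!-- read-only rendering -->
  <xsl:template match="person/name" mode="view">
    <span><xsl:value-of select="."/></span>
  </xsl:template>

  <!-- editable rendering of the very same node -->
  <xsl:template match="person/name" mode="edit">
    <input name="name" value="{.}"/>
  </xsl:template>

  <xsl:template match="person">
    <!-- switch mode="view" to mode="edit" and the whole page changes -->
    <xsl:apply-templates select="name" mode="view"/>
  </xsl:template>

</xsl:stylesheet>
```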
XSLT is really, really fast. Razor templates get compiled, but so do XSLT stylesheets, and they can transform huge datasets in milliseconds. The compilation imposes some limitations, but none that are a real problem. XSLT is modular, and there are several mechanisms that decide which template transforms which node (where a node can be an XML element, an attribute, a text node or a comment):
- xsl:apply-templates can use a select expression with multiple conditions and a mode attribute, to choose which nodes get processed and in which mode.
- each xsl:template has its own match pattern, which again narrows down which nodes it processes; different modes can be used, together with numeric priorities.
- on top of that, .xsl files can be imported and selectively invoked with xsl:apply-imports.
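The match, priority and import mechanisms from the list above can be combined. A minimal sketch (file names and element names are assumptions for illustration):

```xml
<!-- base.xsl: the generic rule -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="item">
    <li><xsl:value-of select="."/></li>
  </xsl:template>
</xsl:stylesheet>

<!-- site.xsl: overrides the base rule, yet can still invoke it -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:import href="base.xsl"/>

  <!-- the higher-priority template wins when several match the same node;
       xsl:apply-imports then hands the node back to the imported rule -->
  <xsl:template match="item[@featured]" priority="2">
    <strong><xsl:apply-imports/></strong>
  </xsl:template>
</xsl:stylesheet>
```

This is the specialization-calls-base pattern that Razor sections cannot express.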
What's wrong with XSLT?
Now that XSLT is the best template language in the world, why do people despise it so much? Again you get a list of problems:
- Steep learning curve. It's no joke: the whole concept is hard to grasp, and it took me a few years. It's probably one of the main reasons people think the W3C produces only bullshit. I have a different opinion, because XHTML could have been the next HTML5, SVG could have been the next Flash, XML could have been the semantic web, and XSLT could have been the one and only portable template language. Still, learning XSL is a major PITA. I urge everyone to use XSLT, but at the beginning, when you cannot yet grasp the functional language components, it feels like it is interfering with what you want to achieve. Only later do you begin to realize how everything fits together and makes most tasks trivial.
- XML and XSL are verbose. That is true. Personally, I write a few more characters and in return get a fast, modular templating engine. RoR shows that some people think otherwise.
- As the focus has shifted to JSON and other technologies, some new development has been cancelled. For instance, .NET will not include an XSLT 2.0 compliant transformer. There are community projects, but then again, Microsoft not supporting this W3C standard is a statement in itself. What a pity.
- The main problem: all input for XSL transformations has to be XML. This has one advantage: you cannot cheat on the MVC pattern, because your model is the XML input and you cannot query your way around it. Depending on the framework you are using, creating XML from your data can be very easy, e.g. with .NET and XmlSerializer, or a major PITA if you have to do it manually. There are ways to cheat around the problem, e.g. by implementing some kind of XPathNavigator as input for your transformation that queries directly into your data model, but then you're doing exactly what you are not supposed to do: having a model, and thus a view, that depends heavily on your application. Luckily, there are frameworks for transforming almost any kind of data into XML in basically any language, and because XSLT is so fast, the overhead on the input side is not a problem.
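To make the "model as XML input" idea concrete, here is a minimal sketch in Python (standing in for what .NET's XmlSerializer does automatically): a plain application object is serialized into the XML document that an XSLT view would then consume. The `to_xml` helper and the `person` structure are purely illustrative.

```python
import xml.etree.ElementTree as ET

def to_xml(tag, obj):
    """Serialize a dict/list/scalar tree into an XML element (illustrative only)."""
    elem = ET.Element(tag)
    if isinstance(obj, dict):
        for key, value in obj.items():
            elem.append(to_xml(key, value))       # each key becomes a child element
    elif isinstance(obj, list):
        for item in obj:
            elem.append(to_xml("item", item))     # list entries become <item> elements
    else:
        elem.text = str(obj)                      # scalars become text content
    return elem

# The application object is the M; its XML form is all the view ever sees.
person = {"name": "Ada", "emails": ["ada@example.org"]}
print(ET.tostring(to_xml("person", person), encoding="unicode"))
# -> <person><name>Ada</name><emails><item>ada@example.org</item></emails></person>
```

Because the view only receives this document, it physically cannot reach back into the application the way a Razor template can.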
With XHTML cancelled, XForms has an uncertain future. There is XSLTForms, which lets all major browsers use XForms. You can still use XSLT to create the HTML your server delivers, and all browsers still ship XSLT processors, so you can reuse the same XSL stylesheets to transform AJAX-retrieved XML content into HTML. Personally, I don't think these technologies will fade away anytime soon. When all the hype about HTML5 is over, people will realize there really isn't much new that jQuery didn't already provide, and then notice that much more and much faster advancement is required. CSS is still shit, and with all the legacy concerns, it will stay that way for a while. HTML forms work pretty much the same as when they were introduced in the '90s, and are a major PITA to parse.
Hopefully, some people besides the uninformed "web designers" will wake up and demand more advancement in interactivity, without loading 500 kB of jQuery code and spending lots and lots of late-night debugging sessions on things that were never really defined or standardized.