From the beginning, the .NET stack has had first-class support for unmanaged libraries. With P/Invoke, developers can access most of the Win32 API, and COM support opens up a wealth of applications and third-party libraries. With recent advancements in the Dynamic Language Runtime, scripts written in Python, Ruby, and JavaScript also come into play.
But are .NET developers actually taking advantage of this? Clint Hill says no.
The work was great and the tools were always getting improved. .NET came out and it was exciting. But after 6 years or so, it was wearing thin. The culture started to get a little crazy. One tenet of this culture (I use tenet loosely) was that all components used had to be derived of .NET materials. Which is to say: If you need a web controls library it has to be a .NET C# library because that is what you’re building your project with. And if that isn’t available, at a cost or free, then build your own.
There are a few problems with this, the least of which is that projects got slow and deadlines got pushed. The biggest problem, however, is that developers began to believe that other technologies were not acceptable, regardless of how well they solved the problem. Today I find this mentality so disappointing.
Clint made these statements in response to a blog post by the popular author Jeff Atwood. Jeff, along with the equally well-known Joel Spolsky, is currently working on a forum site known as Stack Overflow. When it came to sanitizing HTML for their site, Jeff turned up his nose at every pre-existing library. His reason: they weren't written in .NET.
Do I regret spending a solid week building a set of HTML sanitization functions for Stack Overflow? Not even a little. There are plenty of sanitization solutions outside the .NET ecosystem, but precious few for C# or VB.NET. I've contributed the core code back to the community, so future .NET adventurers can use our code as a guidepost (or warning sign, depending on your perspective) on their own journey. They can learn from the simple, proven routine we wrote and continue to use on Stack Overflow every day.
Dare Obasanjo explains why this usually isn't a good idea:
The problem Jeff was trying to solve is how to allow a subset of HTML tags while stripping out all other HTML so as to prevent cross-site scripting (XSS) attacks. The problem with Jeff's approach, which was pointed out in the comments by many people including Simon Willison, is that using regexes to filter HTML input in this way assumes that you will get fairly well-formed HTML. The problem with that approach, which many developers have found out the hard way, is that you also have to worry about malformed HTML due to the liberal HTML parsing policies of many modern Web browsers. Thus to use this approach you have to pretty much reverse engineer every HTML parsing quirk of common browsers if you don't want to end up storing HTML which looks safe but actually contains an exploit. To utilize this approach Jeff really should have been looking at using a full-fledged HTML parser such as SgmlReader or Beautiful Soup instead of regular expressions.
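The failure mode Dare describes is easy to reproduce. The sketch below, in Python rather than the .NET libraries named above, shows a hypothetical naive regex filter (not Jeff's actual code) whose very act of deleting a forbidden tag reassembles the surrounding malformed fragments into a working `<script>` tag, and contrasts it with a parser-based stripper in the spirit of Dare's suggestion, built on Python's standard-library `html.parser`:

```python
import re
from html import escape
from html.parser import HTMLParser

# Naive regex blacklist (illustrative only): delete anything that
# looks like a <script> element.
script_re = re.compile(r"<script[^>]*>.*?</script>", re.IGNORECASE | re.DOTALL)

def naive_sanitize(html: str) -> str:
    return script_re.sub("", html)

class TextOnly(HTMLParser):
    """Parser-based stripper: tokenize like a browser, keep only the
    text content, and escape it on output so no markup can survive."""
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []

    def handle_data(self, data):
        self.out.append(escape(data))

def parser_sanitize(html: str) -> str:
    p = TextOnly()
    p.feed(html)
    p.close()
    return "".join(p.out)

# Malformed input crafted so that removing the two well-formed
# <script></script> matches splices the leftover fragments together.
evil = "<scri<script></script>pt>alert(1)</scri<script></script>pt>"

print(naive_sanitize(evil))   # -> <script>alert(1)</script>  (exploit survives!)
print("<" in parser_sanitize(evil))  # -> False (no markup can survive escaping)
```

The regex filter handles well-formed input fine; it is precisely the malformed case, which browsers parse liberally, that defeats it. The parser-based version never emits a raw `<` at all, which is the property Dare is pointing at: sanitize the parsed token stream, not the raw string.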
This debate is not just about sanitizing HTML; it goes to the core of the .NET culture. For .NET developers, is it appropriate to use non-.NET libraries in your day-to-day work?