Governance in the Web 2.0 world
Everything over HTTP
Server-driven communication goes back to Netscape's Server Push in 1995, and lives on today in pseudo-protocols such as "Comet" [what do you call something that looks like an embryonic protocol but lacks an RFC or any other published spec?]. But it's a complete mismatch with HTTP's request-response model, and implementing it on top of HTTP implies significant extra complexity compared with running a dedicated service on a different port.
A Comet application is a Heath Robinson construction for driving a non-HTTP network application over HTTP [I expect an advocate for Comet could give us excellent reasons why that's a foul calumny]. In a sensible world it would run over its own port, independent of the HTTP server. But security policies stand in the way of that. So the world routes around the firewall using Comet instead, and in doing so introduces more complexity, and with it more scope for bugs and security vulnerabilities.
This is a bad thing. And there's a whole culture of it: demand is such that we're getting generic tools and a name. How long before there are off-the-shelf applications that only support Comet, so that even a company with a pragmatic and informed firewall policy is driven to use it? Client-side support is assured too: from a browser's point of view it's just another potentially useful capability in an AJAX world.
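The core trick is simple enough. Here's a minimal sketch of the long-polling variant of Comet; the handler class, the "news" payload, and the timings are all invented for illustration:

```python
# A long-polling "Comet" sketch: the client makes an ordinary HTTP GET,
# and the server deliberately holds the response open until it has an
# event to push. All names, ports and delays here are illustrative.
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

event = threading.Event()      # signalled when there is news to push
payload = {"msg": b""}         # the news itself

class LongPollHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Hold the request open until the event fires (or we give up).
        # This is the whole trick: server push shoehorned into
        # HTTP's request-response model.
        event.wait(timeout=10)
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(payload["msg"])

    def log_message(self, *args):   # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), LongPollHandler)
port = server.server_address[1]
threading.Thread(target=server.handle_request, daemon=True).start()

def publish():
    time.sleep(0.5)                 # the "event" happens later...
    payload["msg"] = b"news arrived"
    event.set()                     # ...and releases the held response

threading.Thread(target=publish, daemon=True).start()

start = time.time()
body = urlopen(f"http://127.0.0.1:{port}/").read()  # blocks until the push
elapsed = time.time() - start
print(body.decode(), round(elapsed, 1))
```

A real client would reissue the request as soon as it returns, giving the server a standing channel to push down; managing that reissue loop is much of what the generic Comet tools do.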
So why should one choose Comet not only over a rational open-another-port strategy, but also over transparent tunneling with the HTTP CONNECT method? I don't know the answer, but CONNECT is widely feared because everyone understands that it breaches the firewall, shifting responsibility from the firewall layer to the server. Perhaps people need something less transparent?
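CONNECT's transparency is easy to demonstrate. This toy sketch (ports and helper names invented) plays all three parties on localhost: a non-HTTP echo service, a proxy honouring CONNECT, and a client tunneling through it:

```python
# A toy illustration of HTTP CONNECT: the proxy reads one CONNECT
# request, opens a raw TCP connection to the named target, answers
# "200 Connection Established", and from then on blindly relays bytes
# in both directions -- which is exactly why CONNECT is feared: the
# proxy can no longer see what it is carrying. Names are illustrative.
import socket
import threading

def echo_server(listener):
    conn, _ = listener.accept()
    with conn:
        conn.sendall(conn.recv(1024))   # the "real" non-HTTP service

def relay(src, dst):
    while True:
        chunk = src.recv(4096)
        if not chunk:
            break
        dst.sendall(chunk)

def connect_proxy(listener):
    client, _ = listener.accept()
    request = client.recv(4096).decode()  # e.g. "CONNECT 127.0.0.1:9999 HTTP/1.1"
    host, port = request.split()[1].rsplit(":", 1)
    upstream = socket.create_connection((host, int(port)))
    client.sendall(b"HTTP/1.1 200 Connection Established\r\n\r\n")
    threading.Thread(target=relay, args=(upstream, client), daemon=True).start()
    relay(client, upstream)              # opaque bytes from here on

# Target service and proxy, each on an ephemeral port.
target = socket.create_server(("127.0.0.1", 0))
proxy = socket.create_server(("127.0.0.1", 0))
threading.Thread(target=echo_server, args=(target,), daemon=True).start()
threading.Thread(target=connect_proxy, args=(proxy,), daemon=True).start()

client = socket.create_connection(proxy.getsockname())
tport = target.getsockname()[1]
client.sendall(f"CONNECT 127.0.0.1:{tport} HTTP/1.1\r\n\r\n".encode())
status = client.recv(4096)               # the proxy's 200 response
client.sendall(b"hello, non-HTTP world")  # now an opaque tunnel
reply = client.recv(4096)
print(status.split(b"\r\n")[0].decode(), reply.decode())
```

The point to notice is that after the 200 response the proxy sees only opaque bytes: the responsibility for what passes through really has moved from the firewall layer to the server.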
Arguably even more bizarre is the clutch of XML-over-HTTP protocols in and around "web services". The complexity here lies in the XML packaging layer rather than in HTTP as such, but the underlying reason looks much the same: no one wants to open their firewall to RPC, so they use XML-RPC instead (or, more usually, the more complex and highly developed WS-* protocols).
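The packaging is easy to inspect with Python's standard library; the method name "multiply" below is invented:

```python
# XML-RPC wraps an ordinary procedure call in XML and ships it as an
# HTTP POST, so to the firewall it is just more web traffic. The
# standard library's marshalling functions expose the packaging itself.
import xmlrpc.client

# What goes over the wire in the request: the call, marshalled as XML.
request_body = xmlrpc.client.dumps((6, 7), methodname="multiply")
print(request_body)

# The server unmarshals, dispatches, and marshals the reply the same way.
params, method = xmlrpc.client.loads(request_body)
response_body = xmlrpc.client.dumps((6 * 7,), methodresponse=True)
result = xmlrpc.client.loads(response_body)[0][0]
print(method, params, result)
```

Note where the work lives: the HTTP part is a plain POST, and all the complexity sits in the XML envelope, exactly as described above.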
In the case of web services, wrapping in XML and routing through HTTP does serve a useful purpose. Exposing raw RPC is a much bigger and more complex risk than merely opening a port on the firewall, so the problem being dealt with is not just one of policy. The web server itself can become an application firewall (e.g. with mod_security), can become part of the application (e.g. with mod_publisher), or both. It can also enforce things like access-control policies and bandwidth management. In short, the application gets the benefit of Apache's modular framework, or whatever benefits another server may offer.
Still, the bottom line is that when a traditional path gets closed, the world will route around it. On balance it's hard to call this a good thing or a bad thing; it's just inevitable.
But there's a critically important caveat - don't fall into a false sense of security. Any vulnerabilities in your application won't go away just because it's tunneled over HTTP!
And if your local Cassandra says you should open a new port in the firewall rather than tunnel over HTTP, perhaps you really should listen. If you want, you can even keep Apache at the server end of your non-HTTP application, using a custom protocol module. ®