XML machine the successor to von Neumann?
Really bring data and programs together
The most basic principle of a von Neumann machine is that programs and data can share memory as they are both just strings of bits. This is still the basis of the architecture of all commercial computers, writes Peter Abraham of Bloor Research.
These two concepts came together at the dawn of computer history but have tended to drift apart ever since. The COBOL programming language doesn't look anything like data. Object-orientation brought process and data closer together, but even then the two were stored in entirely different ways.
XML goes back to von Neumann because data and programs can both be stored in XML. In a sense XML goes further, storing input and presentation in the same format as well. To take just three examples: ebXML is data, BPEL4WS is program and XForms is presentation.
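To make the point concrete, here are three heavily simplified fragments in the spirit of those three standards — data, program and presentation, all expressed as XML. These are illustrative sketches only, not valid instances of the real ebXML, BPEL4WS or XForms schemas.

```xml
<!-- Data: an order document, in the spirit of ebXML (simplified sketch) -->
<order id="1001">
  <item sku="A1" qty="2"/>
</order>

<!-- Program: a process definition, in the spirit of BPEL4WS (simplified sketch) -->
<process name="handleOrder">
  <receive operation="submitOrder"/>
  <invoke operation="checkStock"/>
</process>

<!-- Presentation: an input control, in the spirit of XForms (simplified sketch) -->
<input ref="order/item/@qty">
  <label>Quantity</label>
</input>
```

The striking thing is that a single parser, a single query language and a single storage model can handle all three.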
This is philosophically and academically interesting but is it of any practical importance?
The simple answer is yes, because if you can develop an XML machine that can process XML data based on XML programs you have a higher level machine than a von Neumann machine. The practical effect of this is the ability to develop new applications with less code.
Is this feasible? A small UK company called hyfinity have a patent pending on the kernel for such a machine, which they call a Morphyc architecture. Think of it as the equivalent of the control process in a von Neumann machine, bringing together input/output, memory and the arithmetic and logic unit. The arithmetic and logic unit is bought in, in the form of commodity components including XML parsers and XPath and XSLT processors. The kernel itself is written in Java and needs only a JVM to run.
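As a toy illustration of the idea — an XML "program" interpreted against XML data, with XPath-style selection doing the work — here is a minimal sketch in Python. The rule format, element names and sample values are invented for this example; hyfinity's actual Morphyc kernel is written in Java and uses full commercial XPath and XSLT processors.

```python
import xml.etree.ElementTree as ET

# The "data" is XML: a tiny order document (hypothetical sample).
data = ET.fromstring("""
<order>
  <item sku="A1" qty="2" price="10.0"/>
  <item sku="B2" qty="1" price="5.5"/>
</order>
""")

# The "program" is also XML: each rule pairs an XPath-style selector
# with an action. This rule language is invented for illustration.
program = ET.fromstring("""
<rules>
  <rule select=".//item" action="total"/>
</rules>
""")

def run(program, data):
    """Interpret XML rules against XML data — both inputs are plain XML trees."""
    total = 0.0
    for rule in program.findall("rule"):
        if rule.get("action") == "total":
            # ElementTree supports a limited XPath subset in findall().
            for node in data.findall(rule.get("select")):
                total += int(node.get("qty")) * float(node.get("price"))
    return total

print(run(program, data))  # 2*10.0 + 1*5.5 = 25.5
```

Because program and data share one representation, the machine can treat its own programs as data — which is exactly what makes the recursive self-development described below possible.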
Based on this kernel, hyfinity have developed two products, MVC and PxP. MVC is an extension of XForms which enables fast development of browser-based applications. They have used this to develop xStudio, the development environment for both products. Once the basic functions of MVC were in place, they used MVC to develop its own extensions. This then gave them the base to build the xStudio functions needed for PxP.
PxP is a peer-to-peer integration package which allows the integration, and in some cases the development, of applications that receive, process and produce XML. This recursive use of products to develop themselves has positive implications: the kernel is small and very well tested, and at the next level up the developers have used their own product, so they make sure it is user-friendly.
The concentration on XML, to the exclusion of all else, makes the architecture of the product very clean and enables a great deal of functionality to be built on a small kernel. Any connections to non-XML messages or data go through a third-party adapter.
The proof that this is an interesting idea is that a company with only eight full-time employees has built a product that is functionally in the same league as many of the much bigger and more established players. If they can develop this level of functionality using their own product, then that product should be capable of developing functionally rich applications for their users. Several clients are now live, running industrial-strength applications.
The other consequence of the small development team is that the product's price point can be much lower, which makes it attractive for highly distributed systems.
The obvious downside of such a small company is the question of whether it can survive in this very competitive marketplace.
If there's any justice, they will do so. ®
Copyright © 2003, IT-Analysis.com