California's state election commission is likely to throw out Diebold's current electronic voting machines, obliging counties to replace them before the November elections. The ATM giant faced a savaging at hearings yesterday, and the company may face civil and criminal proceedings as a consequence of using uncertified software. More importantly, Diebold acknowledges that earlier security problems haven't been fixed yet.

The Oakland Tribune [a must-read] discloses memos between Diebold and its law firm Jones Day in which company executives acknowledge that they were breaking state regulations by running uncertified software. This contrasts with a letter to regulators sent in February in which Diebold vowed that it hadn't made any hardware modifications in the past five years. In fact, software updates were being sent to the voting machines only two weeks before this March's Super Tuesday elections.

The Tribune cites a former Diebold employee who confirms scenes of chaos within the company as March's elections were held. The terminals had failure rates of 24 per cent in Alameda County and 40 per cent in San Diego County. Incredibly, tests were performed on only ten to fifteen per cent of machines before they left the factory. Diebold president Bob Urosevich admitted that thousands of voters had been disenfranchised.

But it isn't over. Diebold has instructed Jones Day to challenge California's procedures on technical grounds, which would prevent the state from throwing out the voting machines. An advisory panel voted 8-0 this week that the state should do so. Officials may be accused of complacency. Last November Kevin Shelley, head of the state's elections office, mandated that voting machines leave an auditable paper trail: but not until July 2006.
® Related stories
Judge OKs California e-voting
Gouging memo leaves Diebold red-faced
Nachi worm infected Diebold ATMs
California mandates e-voting paper trails
Electronic Voting Debacle
E-voting vendor sued for DMCA takedown
Fraud potential found in e-voting systems
In an irony that won't escape long-time Google watchers, an overlooked regulation will oblige the company to disclose more information than it wanted to.

Google has long fought shy of an IPO, as remaining private allows the company to "avoid public scrutiny", in the words of co-founder Sergey Brin. Comparably sized companies have happily stayed private and out of the limelight, such as one of the world's largest, the SAS Institute, which turns over $1 billion a year. However, section 12(g) of the 1934 Securities Exchange Act, brought about to temper the excesses of 1920s speculation mania by introducing greater transparency, obliges even private companies of a certain size - 500 shareholders and $10 million in revenue - to make more comprehensive disclosures.

"The filing with the Securities and Exchange Commission would reveal so much about the secretive firm that many experts believe Google might take the next logical step and file for an initial stock offering," the San Jose Mercury suggested this week, and wire services reaffirm that this requirement has tipped Google inexorably towards an IPO.

Although press comment has focused on the similarities between the Google IPO mania and the excitement that followed Netscape's flotation in 1995 - indeed, there are many here who hope it results in the same speculative bubble - the parallels aren't exact. When Jim Clark took Netscape public, he was making a gamble. Netscape was a few months old, had little discernible income, little of what marketers call "brand equity", and certainly no secret sauce. By floating the company, Clark was propelling it into the big league. It was a sleight of hand that could only last until the money ran out, or Microsoft woke up.

Google doesn't have a secret sauce either (as Daniel Brandt notes in his acidic, but accurate, summary here). On the other hand it is a world-known name, and more importantly, a profitable business as an advertising broker.
Brin is correct: Google simply doesn't need to float. His official explanation is that a public Google would be distracted by the quarterly reporting requirements, and so we must take him at his word. But Google sounds pretty distracted already: it neglected to do due diligence on its new Gmail service (a small British company owns the trademark in 80 countries and there are already two Gmails out there), and Bill Joy turned down a company in such chaos that "no one can figure out who's in charge or even what Google's licensing policy is," Forbes reported last year.

However, the manner of its path to IPO will set the mood music for the company's foreseeable future. Even if it does its job well, it isn't likely to escape the attention of regulators. In fact, the better it does, the more attention it will attract. This week saw the introduction of the first bill to require Google to provide an opt-out for users of its Gmail service. Hard-headed regulators simply don't go gaga at the sight of men and machines the way starry-eyed techno-utopians do. Marc Andreessen said some pretty reckless things a decade ago, but he never dreamed of planting his company's chip in your head. ®

Related stories
Bill Joy spurned job at out of control Google
Google values its own privacy. How does it value yours?
State senator drafts Google opt-out Bill
Google founder dreams of Google implant in your brain
Feds slap cuffs on Google stock scammer
Google launches email, takes the Bill Gates defense
Why Microsoft could be Google's best bet
Google heals the sick
Google buys search engine
PageRank RIP?
The World Wide Web Consortium (W3C) has announced final approval of two key technologies for the Semantic Web: the revised Resource Description Framework (RDF) and the Web Ontology Language (OWL). These are important components for the Semantic Web that will make it possible for the Web to respond more intelligently to people's information needs. The powerful ideas behind the Semantic Web are highly relevant to enterprises' internal needs and can be expected to cross over, as have Instant Messaging and blogging, to name just a couple of Internet technologies that are changing the enterprise's approach to information handling.

The Semantic Web uses three key technology standards: XML to provide the rules and syntax to define structured documents, RDF to define a vocabulary framework for describing document properties, and OWL to define an area of knowledge.

Today, XML is the most mature and widely used of these standards. It provides a set of rules to define and communicate the structure of both documents and data. However, although XML supplies a powerful, flexible syntax to define structured documents, it does not define their meaning.

The next standard, RDF, provides a set of rules for describing the semantics or meaning of documents. Just as XML provides a way to describe the structure of a document, RDF provides the means to define its characteristics or properties. The RDF syntax links Resources and Properties into Statements: a Resource is anything that can have a URI (uniform resource identifier); this includes Web pages as well as individual elements of an XML document. A Property is a Resource that has a name and is used to indicate a property of a document, such as its Author or Title. A Statement combines a Resource, a Property, and a value - for example, "The Author of article id #3756 is Martin Langham."
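As an illustration, that Statement might be written down in RDF/XML syntax roughly as follows. The article URI here is invented for the example, and we borrow the widely used Dublin Core `creator` property to play the role of Author:

```xml
<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <!-- The Resource: article #3756, identified by a (hypothetical) URI -->
  <rdf:Description rdf:about="http://example.com/articles/3756">
    <!-- The Property (Author, via Dublin Core "creator") and its value -->
    <dc:creator>Martin Langham</dc:creator>
  </rdf:Description>
</rdf:RDF>
```

The same Resource-Property-value triple could equally be exchanged in other RDF serialisations; the point is that the meaning, not just the markup structure, is machine-readable.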
An RDF Schema combines these descriptions into sets of Vocabularies that would be used for a similar group of documents, such as a library catalogue, a repository of HR documents, engineering drawings and so on. Independent parties will be able to exchange vocabularies and use the same Property descriptions. RDF-based vocabularies will become important commercial assets, and we can expect certain vocabularies to become dominant in an application area as each information user starts to use the same language.

RDF is not sufficient by itself to support the requirements of the Semantic Web. The relationships between a collection of documents need to be described to enable computers to provide various kinds of reasoning services about the domain and the knowledge described. For this task an ontology is needed that describes and represents an area of knowledge. Ontology is a term borrowed from philosophy that refers to the science of describing the kinds of entities in the world and how they are related. An ontology language is used to formally describe a knowledge domain, including what individuals and classes of individuals there are in that domain and the relationships between those individuals and classes. Ontology languages are used by people and applications that need to process subject-specific information - for example in finance, education, construction, etc.

OWL, the Web Ontology Language recently released by the W3C, provides a language for defining Web-based ontologies. OWL defines classes of documents and their properties, and the constraints on the way those classes and properties can be related. OWL builds additional syntax on top of RDF and RDF Schemas for describing Properties and the relations between Classes.

Now that the three standards for defining the Semantic Web are available, a lot of work will be needed to create the RDF schemas and ontologies to make the Semantic Web a reality.
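A minimal sketch of what such an ontology looks like: the class and property names and the base URI below are invented for illustration, but the constructs themselves (`owl:Class`, `rdfs:subClassOf`, `owl:ObjectProperty` with a domain and range) are the standard OWL and RDF Schema vocabulary:

```xml
<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
         xmlns:owl="http://www.w3.org/2002/07/owl#"
         xml:base="http://example.com/ontology/documents">
  <!-- Two classes of documents, one a specialisation of the other -->
  <owl:Class rdf:ID="Document"/>
  <owl:Class rdf:ID="EngineeringDrawing">
    <rdfs:subClassOf rdf:resource="#Document"/>
  </owl:Class>
  <!-- A relation constrained to hold between engineering drawings -->
  <owl:ObjectProperty rdf:ID="revises">
    <rdfs:domain rdf:resource="#EngineeringDrawing"/>
    <rdfs:range rdf:resource="#EngineeringDrawing"/>
  </owl:ObjectProperty>
</rdf:RDF>
```

Given statements written against such an ontology, a reasoner can infer, for instance, that anything which `revises` something must itself be an EngineeringDrawing, and therefore a Document - the kind of automated reasoning the Semantic Web depends on.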
These technology standards may appear arcane to enterprise information users, but they have the same potential as Web services to revolutionise the way that we can use information to get the job done.

© IT-Analysis.com

Related stories
IBM throws weight behind BPEL
FSF eases Microsoft schema patent fears
W3C OWL proposals take flight
Critics took aim this week at a controversial international treaty intended to facilitate cross-border computer crime probes, arguing that it would oblige the US and other signatories to cooperate with repressive regimes - a charge that the Justice Department denied.

The US is one of 38 nations that have signed onto the Council of Europe's "Convention on Cybercrime," but the US Senate has not yet ratified the measure. In a letter to the Senate last November, President Bush called the pact "the only multilateral treaty to address the problems of computer-related crime and electronic evidence gathering." The treaty "would remove or minimize legal obstacles to international cooperation that delay or endanger U.S. investigations and prosecutions of computer-related crime," he said.

Drafted under strong US influence, the treaty aims to harmonize computer crime laws around the world by obliging participating countries to outlaw computer intrusion, child pornography, commercial copyright infringement, and online fraud. Another portion of the treaty requires each country to pass laws that permit the government to search and seize email and computer records, perform Internet surveillance, and order ISPs to preserve logs in connection with an investigation. A "mutual assistance" provision then obligates the country to use those tools to help out other signatory countries in cross-border investigations: France, for example, could request from the US the traffic logs for an anonymous Hushmail user suspected of violating French law.

Dual criminality. Not

That worries civil libertarians. The treaty is open to any country, with the approval of those that have already ratified it, and some fear that it could put the United States' surveillance capabilities at the disposal of foreign governments with poor human rights records, who may be investigating actions that are not considered crimes elsewhere.
"There is no requirement that the act that is being investigated be a crime both in a nation that is asking for assistance, and the nation that is providing assistance," said the ACLU's Barry Steinhardt, speaking at the Computers, Freedom and Privacy Conference in Berkeley, California on Thursday. The US and other countries will be asked to use the electronic snooping powers mandated by the treaty to track political dissidents, he said.

Betty Shave, who heads the Justice Department's international computer crime division, admitted that the treaty mostly lacks so-called "dual criminality" provisions, but she countered that other language in the pact would prevent abuses. One clause in the treaty allows a country to refuse to cooperate in an investigation if its "essential interests" are threatened by the request: Shave says that would allow the US to bow out of a probe targeting free speech or other actions protected by the U.S. Constitution. Moreover, political offenses are specifically excluded from some types of mutual assistance requests available under the treaty.

The treaty is necessary because "crime and terrorism, like everything else, are moving onto the Net and are increasingly difficult to investigate, and a lot of crime is international," said Shave. "Many crimes are deliberately staged through various countries just to make it difficult to investigate."

Privacy International's Gus Hosein argued that the international community should have produced model legislation to harmonize computer crime laws, instead of a treaty with mutual obligations. "You create a treaty, suddenly you have all these interests come in."

Thirty-four European nations, plus Canada, Japan, South Africa and the United States, have signed onto the treaty, but only five have thus far ratified it: Albania, Croatia, Estonia, Hungary and Lithuania. If ratified, no new domestic laws would have to be passed to bring the US into line with the treaty, according to the Justice Department.
Steinhardt was skeptical. "The treaty is already being used as a pretext in some developing nations to pass some pretty draconian laws," he said. "I wouldn't be surprised to see it used in the US that way."

Copyright © 2004,

Related stories
MPs hold inquiry into UK computer crime law
US cybercrime push imperils personal security of Americans
Security fears over UK 'snooper's charter'
US assumes global cyber-police authority
Int'l cybercrime treaty remains horrid
Yahoo! is calling on the EU to set up a uniform intellectual property rights regime. The Web portal wants to set up a music download service in Europe, but it says its efforts are hampered by the different licensing rules across the region.

That's according to Martina King, managing director, country operations, Yahoo! Europe, who was attending an informal summit of EU ministers of communications in Dundalk, Ireland on Thursday (22 April). The meeting included a CEO Roundtable, where the heads of many of Europe's biggest ICT companies communicated their concerns about broadband rollout in the EU to the ministers.

"I have to say I was very impressed," King told ElectricNews.Net. "The ministers were very knowledgeable and seemed genuinely concerned about broadband. It was a surprise, because I came here not expecting much. I just wanted to make sure that my concerns were heard."

Chief among her worries is the lack of co-ordination in Europe on intellectual property rights policy. This non-uniformity makes it difficult to ink deals with music providers, she said. "As it is now, we have to negotiate [music download rights] in every country, and the terms are different in each. It is not like the US, which is a single market."

Her concerns were also echoed in a report released in Dundalk by more than a dozen high-tech firms, including Alcatel, OD2, Telecom Italia, AP, AOL Europe and Cisco. The report, prepared by PricewaterhouseCoopers, was submitted to the ministers at the event. It called for stronger and more unified policies throughout the 25-state bloc on issues such as spam, broadband rollout and uptake, and implementation of e-government services. It also said that EU governments need to put in place a common regulatory framework for digital services that "does not hinder the development of new business models and services in the broadband market."
Another priority in the report - one that is close to King's heart - was a call for the EU communications ministers to develop a balanced and unified approach to protecting intellectual property rights. The group of companies behind the report said they support the recent and highly controversial Directive on Intellectual Property Enforcement, "because it aims to harmonise minimum IPR enforcement rules throughout the enlarged EU."

But the PwC-prepared report also noted that not everyone is on board. "Whereas service providers wish to be able to use digital content on reasonable terms, on a pan-European basis, content owners are opposed to government intervention in the process of commercial negotiation...There is therefore a real need to develop a consensus between content owners and service providers," said the report presented to the EU ministers.

In other comments, King said Yahoo! would be "very interested" in launching a co-branded broadband product in Ireland, similar to the BT Yahoo! broadband product on offer in the UK. She could not confirm if Yahoo! was in talks with any Irish telcos to launch such a product.

© ENN

Related stories
Napster's music licensing frustration
Europe demands open-to-all DRM tech
Apple iTunes Europe debut 'may be delayed'
Aussie firms fight to take biggest loss for music downloads
OD2 clocks up 1m downloads in Q1
A senior Qwest executive this week provided a grim picture of the telecommunications industry, saying all is still not well in the sector despite some recent optimistic chatter about IT spending coming back in a significant way.

"The telecommunications market goes through cycles like everything else," said Wesley Kaplow, the CTO of Qwest's government services business, at the IEEE International Symposium on Cluster Computing and the Grid held here. "Unfortunately, because the market is not what it should be or could be, many companies are going away."

Gloomy forecasts for the telecommunications market are certainly not new. The sector has been punished by a massive decline in spending, as companies such as Lucent and MCI (WorldCom) appear only as shadows of their former selves. Over the past six months, however, gains seen in other sectors such as hardware, processors and software seemed to point to a solid rise in overall IT infrastructure spending. In addition, a surge in mobile device and mobile services sales opened a nice area of growth for telcos and equipment makers.

But the most recent fiscal results from the sector largely quashed any hopes of a flash near-term recovery. Lucent posted a profit for the third straight quarter but did so on the back of cost-cutting measures. Sales of wireless and landline equipment fell. Similarly, AT&T Wireless lost 367,000 subscribers in the first quarter, and Nokia warned that sales of its handsets had dropped off. There are signs of hope, but overall, telecommunications companies appear set for single-digit growth at best, according to analysts.

A big problem for the infrastructure providers is that no company is willing to spend on next-generation technology, Kaplow said. "We are trying to maximize every megabit out of our systems without spending because we're broke," he said. "We're broke."

"The industry itself is still in quite a flux, and nobody knows exactly what will happen.
Whether we will be acquired or acquire is hard to say." This uncertainty, coupled with still-high costs for equipment, has telecommunications companies locked in what Kaplow described as "stage two" development. The telcos are pushing equipment makers to lower the cost of their gear so that more bandwidth can be pulled out of current systems. Until costs come down, companies will be reluctant to move to "stage three", where vendors try out "revolutionary ideas" for networks.

For its part, Qwest plans to make a major infrastructure buy in the near future. "It will be the largest network buy in the next two to three years," Kaplow said without providing more detail. Qwest is in the process of releasing proposals for the purchase to suppliers. ®

Related stories
Vonage goes to Canada
Cable TV ruled comms medium
Court backs telcos in US network wars
AT&T ADSL thunders across US heartland
Qwest to sell VoIP to Harry Homeowner