Another systematic SCADA vuln
CoDeSys environment gives World+Dog networked command-line root
If it’s Monday, it must be time for a new SCADA vulnerability: this time, arising through the combination of a popular development environment and bad developer habits.
Described in full by Digital Bond researcher Reid Wightman here: as many as 261 manufacturers may have built insecure products on the software, with heaven-knows-how-many vulnerable systems deployed in the field.
The software in question is CoDeSys, from German company 3S. It provides a control system development environment which writes finished code to a runtime engine. Because the runtime needs access to /dev (if the target system is Linux) and an output bus, Wightman says it is often given root or (on Windows-based targets) administrator access.
And that becomes a problem when the environment provides network access to the command line – in the case of CoDeSys, via a TCP listener that’s part of the executable binary.
“The TCP listener service allows for file transfer as well as a command-line interface,” the post states. “Neither the command-line interface nor the file transfer functionality requires authentication.
“The result of all of this is that a user with the right know-how can connect to the command-line of CoDeSys and execute commands, as well as transfer files. Commands include the ability to stop and start the running ladder logic, wipe PLC memory, and list files and directories. Transferring files include the ability to send and receive. Sending and receiving files also suffers from directory traversal — we can read and write files outside of the CoDeSys directory on the controller using “../” notation. On most operating systems this includes the ability to overwrite critical configuration files such as /etc/passwd and /etc/shadow on Linux, or the Windows registry on Windows CE.”
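For context, the traversal Wightman describes is the classic failure to canonicalise client-supplied paths before using them. Below is a minimal Python sketch of the check the runtime evidently lacks; the base directory is hypothetical, purely for illustration:

    import os

    # Hypothetical storage root, for illustration only -- not a real CoDeSys path.
    BASE_DIR = os.path.realpath("/opt/codesys/files")

    def safe_resolve(requested: str) -> str:
        """Resolve a client-supplied path, refusing anything that escapes BASE_DIR.

        realpath() collapses any '../' components (and symlinks), so a simple
        prefix check afterwards is enough to catch traversal attempts.
        """
        candidate = os.path.realpath(os.path.join(BASE_DIR, requested))
        if candidate != BASE_DIR and not candidate.startswith(BASE_DIR + os.sep):
            raise PermissionError(f"path escapes base directory: {requested!r}")
        return candidate

    # A traversal attempt like the one quoted above is rejected:
    #   safe_resolve("../../../etc/passwd")  ->  PermissionError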
Apparently, the sole protection against malicious access built into the system is in its licensing system: the CoDeSys target system is only supposed to talk to its own PLC-Browser software. This, as Wightman has demonstrated (complete with code), is easily bypassed – meaning that any system visible to the Internet is vulnerable to attack. ®
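To see why a licensing-style client check is not authentication, consider this deliberately simplified sketch. The banner, host and port are invented for illustration; the real CoDeSys handshake is documented in Wightman's post. The point is that anything a legitimate client sends in the clear, an attacker can replay:

    import socket

    # Deliberately simplified: imagine a runtime that "recognises" its own
    # tooling by a fixed greeting. Host, port and banner are all invented;
    # this is NOT the real CoDeSys protocol (see Wightman's post for that).
    EXPECTED_BANNER = b"HELLO-PLC-BROWSER\n"

    def connect_as_official_client(host: str, port: int) -> socket.socket:
        """Any program can replay the same fixed greeting, so it proves nothing."""
        sock = socket.create_connection((host, port), timeout=5)
        sock.sendall(EXPECTED_BANNER)  # the "licence check", trivially spoofed
        return sock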
Re: I have a theory
"Anyone who understands anything about safety-critical systems doesn't want to write them, becuase they are clearly hard to get right and the consequences of mistakes are potentially serious."
I have a different theory on similar lines.
There are people willing to do safety critical work. Some of them are knowledgeable and competent and even reliable and do actually want to Do The Right Thing.
But the PHBs in charge of the overall business these days don't want to hear that "doing it right" will cost money, and may require engineering competences and attitudes which are not readily available from the bargain-basement, Windows-centric gene pool. And "whistleblowing" to the regulators (whose job it is to make sure that bad things don't reach places where they can do damage) doesn't pay the mortgage.
So, the skills and knowledge exist (or existed), but are dying and/or ignored.
Les Hatton, are you following this stuff? Ross Anderson, do your team know about anything outside the financial sector?
And Bad Networking Habits
...meaning that any system visible to the Internet is vulnerable to attack.
Which should be none. There is absolutely no reason in this day and age not to have SCADA systems behind a proper firewall with access limited solely to authorized remote sites or VPN clients.
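A quick way to audit that from the outside is a plain TCP reachability check. Port 1200 is the commonly cited default for the CoDeSys v2 runtime listener, but treat that as an assumption and verify it against your own kit:

    import socket

    def is_port_reachable(host: str, port: int = 1200, timeout: float = 3.0) -> bool:
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        # 192.0.2.10 is a documentation address (RFC 5737); substitute your own.
        if is_port_reachable("192.0.2.10"):
            print("Listener reachable from here -- put it behind a firewall or VPN.")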
Re: I have a theory
"Anyone who understands anything about safety-critical systems doesn't want to write them, because they are clearly hard to get right and the consequences of mistakes are potentially serious."
The problem is it has to pass three requirements:
1. It has to be reliable. Not just fairly reliable. We're talking five-nines reliable (99.999% uptime, roughly five minutes of downtime a year), since the code will run all day, every day.
2. It has to be safe. The equipment that it runs is very big, and very expensive. There can't be any unexpected outcomes, EVER. One unexpected run-time failure could cost more in damage than the coders will make over a career.
In times past, only requirements 1 and 2 were deemed important. Requirements 1 and 2 make the code insanely expensive to write. That's why one company wrote it and a lot of companies bought it, to offload risk (more of a myopic PHB problem: they can point fingers if it doesn't work, not realizing that mistakes at this level might be career ending/limiting). Up until recently, there wasn't a third requirement.
3. It has to be secure. This is a new one, and one the industry is failing to get resolved. Now an entire industry has to review all of its previous code, which was already insanely expensive, and development costs climb further at the very moment systems are being built down to a price with off-the-shelf parts.
There is also a big disconnect here between equipment manufacturing companies and equipment operating companies.
The risk belongs to the equipment operating companies. They have to buy from only a handful of manufacturers, and no matter what they demand, they can only get the canned solution those manufacturers offer. If it doesn't meet all three requirements, who are they going to complain to?
On the equipment manufacturing side, the coders can't get the PHBs to spend money on the additional development if it doesn't make money.
The big problem is that the power in this ecosystem favors the manufacturers, but the pressure to make things more secure is being placed on the operators.