Chemists bitten by Python scripts: How different OSes produced different results during test number-crunching

Boffins claim code was fine... when they wrote it

A bug in the code

Analysis Chemistry boffins at the University of Hawaii have found, rather disturbingly, that a particular set of Python scripts used in their research can produce different results depending on which operating system runs them.

In a research paper published last week in the academic journal Organic Letters, chemists Jayanti Bhandari Neupane, Ram Neupane, Yuheng Luo, Wesley Yoshida, Rui Sun, and Philip Williams describe their efforts to verify an experiment involving cyanobacteria, better known as blue-green algae.

Williams, associate chair and professor in the department of chemistry at the University of Hawaii at Manoa, said in a phone interview with The Register on Monday this week that his group was looking at secondary metabolites, like penicillin, that can be used to treat cancer or Alzheimer's.

Yuheng Luo, a graduate student working with assistant professor Rui Sun, tried to verify some of the group's experimental results, but found the scripts' output varied depending on the operating system being used.

"There's actually a problem with the code, to the point that it depends on which computer you're using," said Williams.

Luo had been using a set of Python scripts that interface with the Maestro molecular modeling environment. The scripts, described in a 2014 Nature Protocols article, were designed by Patrick Willoughby, Matthew Jansma, and Thomas Hoye to handle nuclear magnetic resonance spectroscopy (NMR), a process for assessing local magnetic fields around atomic nuclei.

When Luo ran these "Willoughby–Hoye" scripts, he got different results on different operating systems. On macOS Mavericks and Windows 10, the results were as expected (173.2); on Ubuntu 16 and macOS Mojave, the results differed (172.4 and 172.7 respectively). That may not sound like much of a gap, but in the precise world of scientific research, it's a lot.

The reason, it turns out, is not specific to Python: the underlying system call that reads files from a directory leaves the order in which entries are returned up to each operating system's implementation. That's why the order differs from one environment to the next.

But in this instance, the issue should have been handled in the Python code itself. In one of the scripts – nmr-data_compilation.py – for example, there's a function that reads files using the glob module:

def read_gaussian_outputfiles():
    list_of_files = []
    for file in glob.glob('*.out'):
        list_of_files.append(file)
    return list_of_files

As the Python documentation explains, "The glob module finds all the pathnames matching a specified pattern according to the rules used by the Unix shell, although results are returned in arbitrary order."

So the author(s) of the "Willoughby–Hoye" scripts should have defined the desired sorting behavior in code to ensure consistency.
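One straightforward way to pin down the order – a sketch, not necessarily the exact change made in the corrected scripts – is to wrap the glob call in Python's built-in sorted(), which guarantees the same alphabetical ordering on every platform:

```python
import glob

def read_gaussian_outputfiles():
    # sorted() fixes the file order regardless of how the OS happens to
    # return directory entries, so every platform sees the same list
    return sorted(glob.glob('*.out'))
```

According to Willoughby, the updated scripts also gained a function to check that the calculations line up, beyond just sorting.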

In a Twitter post, Patrick Willoughby, assistant professor of chemistry at Ripon College, thanked Williams, Sun, and their colleagues for the find and suggested the scripts had worked properly in the past.


"When I wrote the scripts six years ago, the OS was able to handle the sorting. Rui and Williams added the necessary sort code and added a function to ensure the calcs were properly aligned," he said.

For this particular type of calculation, the order in which files are compared affects the results. The same is true of other experiments, though Williams isn't certain how many are affected: he estimates at most 150 to 160 research projects, and noted that last year only one paper explicitly acknowledged using the scripts.
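To see why file order matters, consider a contrived example – the file names and numbers here are invented for illustration, not taken from the paper. If computed values are paired positionally with reference values that assume alphabetical file order, an arbitrary listing order mispairs them and changes the aggregate error:

```python
# Hypothetical data: computed shifts per output file, and reference values
# whose order assumes the files are listed alphabetically
computed = {'confA.out': 170.1, 'confB.out': 176.3}
refs = [170.0, 176.0]

files_sorted = sorted(computed)               # ['confA.out', 'confB.out']
files_arbitrary = ['confB.out', 'confA.out']  # another OS's listing order

# Total absolute error under each ordering: correctly paired vs mispaired
err_sorted = sum(abs(computed[f] - r) for f, r in zip(files_sorted, refs))
err_arbitrary = sum(abs(computed[f] - r) for f, r in zip(files_arbitrary, refs))
```

Here err_sorted comes out around 0.4 while the mispaired ordering yields roughly 12.2 – same code, same data, different answer.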

He said that the experimental deviations didn't affect his group's research conclusions, but he allowed that an error of this sort could have a meaningful impact in different circumstances.

"The hope is that this paper gets us to talk a bit more about how we treat and view software that we exchange back and forth," said Williams. "We somehow naively assume this stuff will work, being experimentalists who don't have a lot of background in computer science." ®

Hat tip to freelance science journo Maddie Bender for first spotting the paper highlighting the code glitch.
