What do TensorFlow, Caffe and Torch have in common? Open CVEs
Sooner or later, dependency hell creates a problem for everyone
Dabblers with prominent artificial intelligence tools have been warned and/or reminded to check their dependencies because some have open vulnerabilities.
That warning came from Qixue Xiao and Deyue Zhang (of Qihoo 360's Security Research Lab), Kang Li (University of Georgia), and Weilin Xu (University of Virginia), who together wrote that “deep learning frameworks are complex and contain heavy dependencies on numerous open source packages”.
The four reached that conclusion after combing through the third-party packages used by the TensorFlow, Caffe, and Torch deep learning frameworks, and looking for any open bugs in those packages.
They found quite a few, and wrote that the frameworks are susceptible to denial-of-service, evasion, or system-compromise attacks.
Noting that this work stands as a preliminary study (The Register expects this means there's more to come), they still found a total of 15 vulnerabilities in the three frameworks.
The largest number of bugs was found in the Open Source Computer Vision (opencv) code base: eleven CVEs in all, exploitable across all three attack classes. opencv was found to be present in both Caffe and Torch.
Caffe also builds against vulnerable versions of the libjasper image manipulation library and the OpenEXR image library.
An interesting aspect of the bugs the researchers found is their discovery of how an out-of-bounds write could be exploited to trick the AI (rather than hosing it or using it for remote code execution): in opencv, “the data pointer could be set to any value in the readData function, and then a specified data could be written to the address pointed by data. So it can potentially overwrite classification results.”
The opencv example is below.
```cpp
bool BmpDecoder::readData( Mat& img )
{
    uchar* data = img.ptr();
    ....
    if( m_origin == IPL_ORIGIN_BL )
    {
        data += (m_height - 1)*(size_t)step; // results in an out-of-bounds write
        step = -step;
    }
    ....
    if( color )
        WRITE_PIX( data, clr[t] );
    else
        *data = gray_clr[t];
    ....
}
```

The corresponding fix validates the decoded image dimensions before they are used:

```diff
index 3b23662..5ee4ca3 100644
--- a/modules/imgcodecs/src/loadsave.cpp
+++ b/modules/imgcodecs/src/loadsave.cpp
+
+static Size validateInputImageSize(const Size& size)
+{
+    CV_Assert(size.width > 0);
+    CV_Assert(size.width <= CV_IO_MAX_IMAGE_WIDTH);
+    CV_Assert(size.height > 0);
+    CV_Assert(size.height <= CV_IO_MAX_IMAGE_HEIGHT);
+    uint64 pixels = (uint64)size.width * (uint64)size.height;
+    CV_Assert(pixels <= CV_IO_MAX_IMAGE_PIXELS);
+    return size;
+}
@@ -408,14 +426,26 @@ imread_( const String& filename, int flags, int hdrtype, Mat* mat=0 )
     // established the required input image size
-    CvSize size;
-    size.width = decoder->width();
-    size.height = decoder->height();
+    Size size = validateInputImageSize(Size(decoder->width(), decoder->height()));
```
TensorFlow came off lightly by comparison, with DoS-able versions of two Python packages, numpy and wave.py.
Here's the full list of vulns the researchers detailed.
| Framework | Package | CVE | Threat | Fixed? |
| --- | --- | --- | --- | --- |
| TensorFlow | numpy | CVE-2017-12852 | DoS | No |
| TensorFlow | wave.py | CVE-2017-14144 | DoS | No |
| Caffe | libjasper | CVE-2017-9782 | Heap | No |
| Caffe | OpenEXR | CVE-2017-12596 | Crash | Yes |
| Caffe/Torch | opencv | CVE-2017-12597 | Heap | Yes |
| Caffe/Torch | opencv | CVE-2017-12598 | Crash | Yes |
| Caffe/Torch | opencv | CVE-2017-12599 | Crash | Yes |
| Caffe/Torch | opencv | CVE-2017-12600 | DoS | Yes |
| Caffe/Torch | opencv | CVE-2017-12601 | Crash | Yes |
| Caffe/Torch | opencv | CVE-2017-12602 | DoS | Yes |
| Caffe/Torch | opencv | CVE-2017-12603 | Crash | Yes |
| Caffe/Torch | opencv | CVE-2017-12604 | Crash | Yes |
| Caffe/Torch | opencv | CVE-2017-12605 | Crash | Yes |
| Caffe/Torch | opencv | CVE-2017-12606 | Crash | Yes |
| Caffe/Torch | opencv | CVE-2017-14136 | Integer | Yes |
In some cases, as the table above shows, it's not the frameworks' fault: the package developers haven't yet shipped a patch.
That will still be enough to make trouble for some who charge ahead with AI experiments without first considering security. ®