Boffins try to grok dogs using AI, a cyber-brain charter, a bot running for mayor, and more

Would you vote for a machine for public office?


Roundup Here are a few bits and pieces from this week's news in AI. Researchers have collected a dataset to analyse dog behaviour using neural networks, the first AI-assisted medical device for diagnosing diabetic retinopathy has been approved by the FDA, and, finally, an AI is running for mayor in Japan.

Who’s a good doggo? A team of researchers have developed a machine learning model that attempts to predict and understand dog behaviour.

They attached sensors and a GoPro camera to a dog, an Alaskan Malamute called Kelp M. Redmon, to collect video data. The clips show Kelp interacting with her surroundings from a dog's-eye view. Image stills from the video feed are fed into a convolutional neural network, and the extracted features act as embeddings for an LSTM (long short-term memory network).

The LSTM processes the features of each successive frame over the time steps, and is trained to predict what the dog will do next. For example, given images of a human throwing a ball that bounces past Kelp, the network guesses that she will scramble to the right after the ball.
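To make that architecture concrete, here's a minimal PyTorch sketch of a frames-in, action-out model in the same spirit. The layer sizes, action head, and module names are invented for illustration and are not taken from the team's paper.

```python
# Sketch of the pipeline described above: a CNN encodes each frame,
# an LSTM reads the embeddings over time, and a head guesses the
# dog's next move. All sizes here are illustrative guesses.
import torch
import torch.nn as nn

class DogActPredictor(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=256, num_actions=8):
        super().__init__()
        # CNN encoder squeezes one video frame into a feature vector
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # LSTM consumes one frame embedding per time step
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        # Classifier head over hypothetical moves ("turn right", ...)
        self.head = nn.Linear(hidden_dim, num_actions)

    def forward(self, frames):              # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])        # prediction after the last frame

# Five 64x64 frames of, say, a ball being thrown -> next-action logits
logits = DogActPredictor()(torch.randn(1, 5, 3, 64, 64))
```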

In a paper published on arXiv, the researchers from the University of Washington and the Allen Institute for AI said the work was “a first step towards end-to-end modelling of intelligent agents. This approach does not need manually labeled data or detailed semantic information about the task or goals of the agent.”

Dogs obviously rely on a lot more than vision to navigate the world, so the researchers hope to include more sensory data, such as smell or touch. The study is also limited to a single dog, and the team is keen to see whether the approach carries over to other dogs and breeds.

“We hope this work paves the way towards better understanding of visual intelligence and of the other intelligent beings that inhabit our world,” the paper concludes.

The paper will be presented at the Conference on Computer Vision and Pattern Recognition (CVPR) in June.

DeepMind gets a new COO - DeepMind has hired Lila Ibrahim as its first chief operating officer, it announced on Wednesday.

Ibrahim began her career in technology at Intel, working as a microprocessor designer, assembly programmer, and business development manager, and rose to become chief of staff to its CEO and chairman, Craig Barrett. She was also president and COO of Coursera, the online education company.

She will work alongside DeepMind’s co-founders: Demis Hassabis, CEO; Shane Legg, chief scientist; and Mustafa Suleyman, head of applied AI.

FDA approves AI medical gizmo for diabetic retinopathy The US Food and Drug Administration has given the green light to the first AI-powered medical device that uses algorithms to detect diabetic retinopathy in retinal scans.

The company behind it, IDx LLC, developed the tool, known as IDx-DR. The FDA found it could detect mild diabetic retinopathy with an accuracy of 87.4 per cent, and could identify patients who did not have the disorder with an accuracy of 89.5 per cent.
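Those two figures are, in effect, the device's sensitivity and specificity. Here's a quick illustration of what each one measures; the patient counts below are made up for the example and are not IDx's trial data.

```python
# Toy confusion-matrix arithmetic showing what the two figures above
# measure. These counts are invented for illustration only.
true_pos, false_neg = 174, 25   # scans with the disease: caught vs missed
true_neg, false_pos = 179, 21   # scans without it: cleared vs flagged

sensitivity = true_pos / (true_pos + false_neg)  # ~87.4%: disease detected
specificity = true_neg / (true_neg + false_pos)  # ~89.5%: healthy cleared

print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}")
```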

“IDx-DR is the first device authorized for marketing that provides a screening decision without the need for a clinician to also interpret the image or results, which makes it usable by health care providers who may not normally be involved in eye care,” the FDA said.

It means IDx can now sell its device to hospitals and clinics. Retinopathy is a well-studied area in medicine and AI. Even Google has taken a stab at the problem, using machine learning on retinal scans to estimate a patient's risk of heart disease, and even whether they're a smoker, with decent accuracy.

Fancy a trip to Korea? If you're pretty good at TensorFlow and deep learning and want to get away, then maybe consider applying to Deep Learning Camp Jeju on Jeju Island, Korea.

The month-long bootcamp will let you work on a deep learning project with mentors, alongside about 20 to 30 other participants. If you get accepted, you'll get a $1,000 (£811.50) stipend, $300 (£243.45) towards your flights, and $1,000 worth of Google Cloud credits, with access to its TPUs.

No visas are required if you plan to stay less than 30 days. The event is organised by TensorFlow Korea, and is a push to advance deep learning in Korea.

Previous projects have included computer vision research for self-driving cars, recommender systems, and GANs. It all sounds pretty sweet, and you can apply here.

OpenAI’s AGI strategy OpenAI published a charter to help guide its long-term mission of creating artificial general intelligence (AGI).

AGI is a contentious topic. Some believe the world is perilously close to developing crazed killer robots (looking at you, Elon, and the late Stephen Hawking), others believe it's a useless term, and some think it's an impossible feat.

The charter is pretty interesting, nevertheless. It's the first time a major AI research lab has declared it will stop competing to create AGI if another project gets there first, on the condition that the rival effort is value-aligned and safety-conscious.

“We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be ‘a better-than-even chance of success in the next two years’,” it said.

It also warned that, as AI progresses and safety and security issues escalate, it may have to be more careful about publishing its research so openly in the future.

“We are committed to providing public goods that help society navigate the path to AGI. Today this includes publishing most of our AI research, but we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research.”

Other points in the charter include the usual commitments to building safe AI that will benefit humanity.

EU member states sign AI deal - Twenty-five European countries have signed a “Declaration of Cooperation on Artificial Intelligence”.

Under the deal, the signatories promise to work together on the most pressing issues in AI, including ethical and legal questions, competitiveness in research, and where and how the technology should be deployed. It should also mean more funding for research, development, and industry.

Austria, Belgium, Bulgaria, Czech Republic, Denmark, Estonia, Finland, France, Germany, Hungary, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, Netherlands, Poland, Portugal, Slovakia, Slovenia, Spain, Sweden, UK, and Norway, all signed the agreement.

Assess your algorithms The AI Now Institute at New York University has published a framework to help companies and public agencies assess the impact of their algorithms.

The Algorithmic Impact Assessment (AIA) report can be summed up in five points:

  1. Agencies should conduct a self-assessment of existing and proposed automated decision systems, evaluating potential impacts on fairness, justice, bias, or other concerns across affected communities.
  2. Agencies should develop meaningful external researcher review processes to discover, measure, or track impacts over time.
  3. Agencies should provide notice to the public disclosing their definition of “automated decision system,” existing and proposed systems, and any related self-assessments and researcher review processes before the system has been acquired.
  4. Agencies should solicit public comments to clarify concerns and answer outstanding questions.
  5. Governments should provide enhanced due process mechanisms for affected individuals or communities to challenge inadequate assessments or unfair, biased, or otherwise harmful system uses that agencies have failed to mitigate or correct.

Although the AIA is inspired by impact assessments in other areas, such as environmental protection, data protection, privacy, and human rights, it can't be legally enforced, so it relies on the goodwill of organizations.

Despite this, Jason Schultz, professor of clinical law at NYU and a former senior advisor on technology policy in the Obama White House, told The Register he believes many companies will happily audit their own algorithms.

“The pressure for algorithmic accountability has never been greater, especially for public agencies. We believe that it’s urgent that public agencies begin evaluating algorithmic decision-making with the same level of scrutiny as these other areas [such as] environmental effects, human rights, data protection, privacy, etc.”

“And lawmakers are finally beginning to take this issue seriously. So I would anticipate many agencies adopting these frameworks voluntarily or with minimal policy interventions. Otherwise, they run huge risks of inflicting harms on the very people they are meant to serve and ultimately undermining public trust in government to help improve our lives with new technologies.”

“Ultimately, this will require a multi-pronged approach. Legislation will likely be part of that. We also believe it’s imperative for companies developing these systems to take responsibility for providing transparency and ensuring that they do not create unintended harm.”

The AI Now Institute also calls for an “independent, government-wide oversight body” to take on third-party auditing to avoid any conflicts of interest.

An AI overlord In other news: An AI is running for mayor in Tama City, Japan.

Michihito Matsuda is a unique mayoral candidate. He? She? It? Matsuda really is different from all the other politicians. (Dressed all in silver, with quite feminine features, Matsuda will be referred to as her by El Reg for now.) Matsuda isn't even human, for god's sake, but her supporters are.

Tetsuzo Matsumoto, a senior advisor to Softbank, and Norio Murakami, an ex-representative for Google Japan, are apparently fans, according to Otaquest.

Remember when Saudi Arabia granted Sophia, a bald, creepy robot, citizenship? This could be the beginning of the end for politicians; wouldn't that be a shame. ®
