Questioning Augmented Video Surveillance

Modern surveillance systems are becoming more capable, raising ethical questions about how they are trained and used

Context

Surveillance systems as we know them are used for live monitoring by operators or for post-hoc verification in case of accident or theft. However, adding data-driven analytics to these systems (where cameras are already installed) enables breakthrough advances in surveillance and detection. These methods reduce response times and greatly increase efficiency. For instance, Vaak, a Japanese start-up, has developed systems that monitor shoppers for suspicious behavior and alert retail store managers through smartphone notifications. The goal here is prevention, and a reported 77% drop in shoplifting losses attests to the system’s effectiveness. Usually, if a suspicious target is approached and asked whether they need any help, that attention alone is enough to curb the probability of theft.

What’s new

The potential advantages of these techniques are indubitable. However, a system trained on biased data, or one that processes sensitive information, can be dangerous, especially in the wrong hands.

Recently, a surveillance system in Buenos Aires has sparked a lot of debate because it mixes personal information with criminal records. Furthermore, it uses personal information about minors, which goes against the international Convention on the Rights of the Child (ratified by Argentina in 1990).


Source: Getty Images

The system matches two databases: one of outstanding arrest warrants and another of images of people’s faces. It has led to the arrest of up to 595 suspects per month with a low false-positive rate (5 of the 595 arrests in that particular month, under 1%).
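The article does not describe the matching pipeline itself, but face-recognition watchlist systems of this kind typically compare a face embedding from a camera frame against reference embeddings tied to warrant records. A minimal sketch of that general approach, assuming unit-normalized embeddings and cosine similarity (the function name, watchlist format, and threshold are illustrative, not details of the Buenos Aires system):

```python
import numpy as np

def match_against_watchlist(face_embedding, watchlist, threshold=0.6):
    """Compare one face embedding against a watchlist of known embeddings.

    face_embedding: 1-D unit-norm vector from a face-recognition model.
    watchlist: dict mapping warrant IDs to unit-norm reference embeddings.
    Returns the best-matching warrant ID, or None if no score beats the
    threshold (i.e. no alert is raised).
    """
    best_id, best_score = None, threshold
    for warrant_id, ref in watchlist.items():
        score = float(np.dot(face_embedding, ref))  # cosine similarity for unit vectors
        if score > best_score:
            best_id, best_score = warrant_id, score
    return best_id

# Toy example with 3-D stand-in "embeddings"
wl = {
    "W-101": np.array([1.0, 0.0, 0.0]),
    "W-102": np.array([0.0, 1.0, 0.0]),
}
probe = np.array([0.97, 0.24, 0.0])
probe /= np.linalg.norm(probe)
print(match_against_watchlist(probe, wl))  # -> W-101
```

The threshold is where the false-positive trade-off discussed above lives: raising it suppresses spurious alerts at the cost of missing genuine matches.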

Why it matters

Video surveillance systems hold very strong potential for both companies and governments. However, Buenos Aires’ system violates international human rights law. Furthermore, it uses criminal records as training data, which are known to be highly biased, partly due to historical systemic inequalities.

What’s next

All AI solutions, but especially those used for surveillance, need clean and compliant data to ensure equitable social outcomes and fairness in real-life applications. There remains a clear lack of large-scale governance and standardized ethical frameworks for high-stakes Machine Learning solutions.

AI for Healthcare

Healthcare is being transformed by a rise in the adoption of data-driven solutions

Context

The market for the Internet of Medical Things (IoMT) is expanding rapidly. From glucose monitors to MRI scanners, sophisticated sensors are increasingly being matched with AI-powered analytics. The IoMT market, which Deloitte estimates will grow to 158.1 billion USD by 2022, is on a mission to improve efficiency, lower care costs, and drive better health outcomes using data-driven insights.

Recently, COVID-19 has highlighted the need for remote patient monitoring. In order to lower hospital readmissions and emergency visits, the large majority of healthcare providers are beginning to invest in remote systems, which allow essential health metrics of at-risk patients to be monitored without a visit to the doctor.

What’s new

Last week, the US Centers for Medicare & Medicaid Services (CMS) stated it would start reimbursing for the use of two novel Augmented Intelligence systems. The first, IDx-DR, can diagnose diabetic retinopathy, a diabetes complication that can cause blindness, from retina scans.


Source: Viz.ai

The second is ContaCT, a software system developed by Viz.ai. It can alert a neurosurgeon when a CT scan shows evidence that a patient has a blood clot in their brain. Rapid diagnosis is essential in these situations, as saving even a couple of minutes can dramatically reduce potential disabilities. Results show that Viz ICH is 98% faster than the standard of care.

In other news, Cota Health, a real-world oncology analytics platform, has recently raised $10 million. By organizing fragmented real-world data, their solution surfaces insights into cancer treatments and variation in care delivery.

Why it matters

The willingness to pay for the standardized use of AI tools is great news for other companies working on medical AI products. It should be noted, however, that these solutions are not replacing healthcare workers. Instead, the solutions provide augmented intelligence that allows the workers to spend more time on essential tasks. These data-driven analytics act as a support system for healthcare, enabling a more informed decision-making process.

What’s next

While the increasingly connected environment of the IoMT brings many advantages and increases the efficiency of many processes, one must not forget the new security risks that come with it. These sophisticated sensors act as edge devices in their respective networks. As such, they open up new vulnerabilities for cybercriminals to exploit.

In general, it is paramount for the future of healthcare that adequate AI governance is put in place to mitigate these drawbacks. Furthermore, many other factors, such as data privacy, fairness, and ethics, come into play when deploying data-driven solutions to real-world high-stakes environments.

Hum to Search

Google has revealed the Machine Learning technology behind their Hum to Search feature

Context

It is no surprise that Google’s Now Playing and Sound Search features are powered by Machine Learning. Released in 2017 and 2018 respectively, they use deep neural networks to identify songs picked up by your device’s microphone. While these features are accurate at identifying songs playing in many settings and environments, you still couldn’t find the song responsible for that melody stuck in your head! Frustrating, especially since research suggests the best way to get rid of an earworm is to listen to the song in question.

What’s new

Google released the Hum to Search feature in October. Just last week, the Google researchers behind it detailed the Machine Learning that powers the technique.

As you might imagine, a studio recording is quite different from a hummed tune: the pitch, tempo, and rhythm often vary significantly between the two. Fortunately, drawing on experience from building the older features, the researchers knew how to spot similarities between spectrograms.


Source: Google AI Blog

Using this knowledge coupled with a state-of-the-art Deep Learning retrieval model, they were able to match millions of songs to hummed melodies with decent accuracy.
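At a high level, the retrieval step described above reduces to embedding both the hummed query and the reference songs into a shared vector space, then returning the nearest song. A minimal sketch of that retrieval step, assuming unit-norm embeddings (the embedding vectors below are toy stand-ins; Google's actual encoder is a trained deep network, and the function and song names are hypothetical):

```python
import numpy as np

def retrieve_song(hum_embedding, song_embeddings):
    """Return the title of the song closest to the hummed query.

    hum_embedding: unit-norm vector produced by the query encoder.
    song_embeddings: dict mapping song title -> unit-norm reference embedding.
    Uses the dot product, which equals cosine similarity for unit vectors.
    """
    scores = {title: float(np.dot(hum_embedding, emb))
              for title, emb in song_embeddings.items()}
    return max(scores, key=scores.get)

# Toy catalog with 2-D stand-in embeddings
songs = {
    "Song A": np.array([0.9, 0.1]) / np.linalg.norm([0.9, 0.1]),
    "Song B": np.array([0.1, 0.9]) / np.linalg.norm([0.1, 0.9]),
}
hum = np.array([0.8, 0.3])
hum /= np.linalg.norm(hum)
print(retrieve_song(hum, songs))  # -> Song A
```

In production, the exhaustive loop over scores would be replaced by an approximate nearest-neighbor index, since the catalog contains millions of songs.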

Google’s blog post goes into more detail about the method, and additionally how the researchers tackled the challenge of obtaining enough training data.

Why it matters

This recent development shows that useful data can be extracted from sound recordings. Applying Machine Learning algorithms to processes that involve sound in other environments can be of immense use. Recently, Visium has developed a sound-based solution called ListenToMachines, helping Nestlé tackle predictive maintenance in factories.


Arnaud Dhaene



Visium 2020
Developed in Switzerland