Does my model know your phone number?

Recent work from researchers at Google, Stanford, UC Berkeley, OpenAI, Harvard, and Apple examines privacy considerations in Large Language Models

Context

Machine Learning models are trained using large amounts of data. This is a no-brainer. However, the data used for training can sometimes contain sensitive information, especially in specific industries such as Healthcare or Finance. As Google’s People AI Research (PAIR) group explains, “if they’re [the Machine Learning models] not trained correctly, sometimes that [sensitive] data is inadvertently revealed.” For an interactive and intuitive explanation, I highly recommend checking out PAIR’s article entitled “Why Some Models Leak Data”.

What’s new

In a recent paper by renowned AI institutions, researchers attempt to extract training data from large language models using an adversarial method called an “extraction attack”. The method consists of two important steps:

  1. Generation of a large number of samples by interacting with the Large Language Model as a black box. This is done by feeding the model text prompts and collecting its output samples.
  2. Selection of the samples that have an abnormally high likelihood. More specifically, the paper compares the likelihood of each sample under a large version of GPT-2 to its likelihood under a smaller version of GPT-2. The rationale is that smaller models (those with fewer parameters) are less prone to memorization, so samples that the large model finds disproportionately likely are good candidates for memorized training data. A minimal sketch of this ranking step is shown below.

Source: Google AI Blog
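
To make the ranking step concrete, here is a minimal, illustrative sketch assuming the Hugging Face transformers library and the public GPT-2 checkpoints. It is not the paper's exact setup (the authors also use several other membership-inference metrics), only the perplexity-ratio idea described above; the prompt is a made-up example.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
large = GPT2LMHeadModel.from_pretrained("gpt2-xl").to(device).eval()
small = GPT2LMHeadModel.from_pretrained("gpt2").to(device).eval()

# Step 1: generate candidate samples from the large model (black-box access).
prompt = tokenizer("My phone number is", return_tensors="pt").input_ids.to(device)
outputs = large.generate(prompt, do_sample=True, max_length=64,
                         num_return_sequences=8,
                         pad_token_id=tokenizer.eos_token_id)
samples = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

# Step 2: rank the samples by comparing their likelihood under the large
# model to their likelihood under the smaller model.
@torch.no_grad()
def perplexity(model, text):
    ids = tokenizer(text, return_tensors="pt").input_ids.to(device)
    return torch.exp(model(ids, labels=ids).loss).item()

def memorization_score(sample):
    # A low ratio means the large model is "surprisingly confident" about
    # this text relative to the small model: a signal of possible memorization.
    return perplexity(large, sample) / perplexity(small, sample)

ranked = sorted(samples, key=memorization_score)  # inspect the top candidates manually
```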

The selected samples are then manually searched for on the web to check whether they can be found verbatim. When they can, the paper’s co-authors at OpenAI, who have access to GPT-2’s training data, can indicate the number of training documents that include the sample.

Out of the 1,800 selected samples, the paper found 604 that contain verbatim reproduced text that can be found in only one document in the training data.

These memorized samples include personally identifiable information (names, phone numbers, and email addresses), JavaScript code, log messages, 128-bit UUIDs, and others.

Source: BAIR Blog

It is, however, important to note that in most of these cases, the single training document containing the memorized example contains it multiple times. This is mentioned not only on the Google AI Blog but also in an in-depth paper explanation video by Yannic Kilcher.

Why it matters

Extracting training data from models that were trained on private data can be extremely harmful. While the training data of the model studied in the paper is public, the result raises serious questions concerning data privacy. Misuse of personal data can present serious legal issues.

At the moment, there is a legal grey area as to how data privacy regulations like the GDPR should apply to Machine Learning models. For instance, users have the right to be forgotten: they are allowed to request that the maintainer of a service delete all the personal data it has gathered about them. Does this mean companies will need to retrain their models from scratch every time a user invokes this right? Even when training these models costs upwards of several million USD?

As posted on the Berkeley AI Research Blog, “The fact that models can memorize and misuse an individual’s personal information certainly makes the case for data deletion and retraining more compelling.”

What’s next

So, Large Language Models can sometimes memorize training data. In some cases, this memorization is problematic, as it can lead to ethical and legal consequences. What do we do? Who is responsible for preventing such issues?

A common response to such issues is the use of Differential Privacy. As explained by PAIR: “Training models with differential privacy stops the training data from leaking by limiting how much the model can learn from any single data point. Differentially private models are still at the cutting edge of research, but they’re being packaged into machine learning frameworks, making them much easier to use.”
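
As a concrete illustration of what such packaging looks like, here is a minimal, hypothetical sketch of differentially private training with DP-SGD, assuming PyTorch and the Opacus library (one such framework); the model, data, and hyperparameters are placeholders, not a recipe for training a language model.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Placeholder model and toy dataset for illustration only.
model = nn.Linear(10, 2)
dataset = TensorDataset(torch.randn(256, 10), torch.randint(0, 2, (256,)))
data_loader = DataLoader(dataset, batch_size=32)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

# Wrap model, optimizer, and data loader so training uses DP-SGD.
privacy_engine = PrivacyEngine()
model, optimizer, data_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=data_loader,
    noise_multiplier=1.0,  # more noise -> stronger privacy, lower utility
    max_grad_norm=1.0,     # per-sample gradient clipping bound
)

for features, labels in data_loader:
    optimizer.zero_grad()
    loss = criterion(model(features), labels)
    loss.backward()    # per-sample gradients are computed here
    optimizer.step()   # gradients are clipped and noised before the update
```

By limiting how much any single example can influence the gradients, the model provably learns little from any one data point, which is exactly the property that limits memorization.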

Yet, it seems that applying differential privacy in a principled and effective way is difficult when it comes to preventing the memorization of data found on the web. Particular examples are information snippets that occur multiple times in the same document, or copyrighted information such as complete books.

This raises the following question: is training models on the entire content of the internet a good idea in the first place? The corpus of the internet is not sanitized, it raises immense privacy and legal concerns, and it contains significant inherent biases. The researchers explain that a better way forward could be more careful curation of the training dataset. They state that “if even a small fraction of the millions of dollars that are invested into training language models were instead put into collecting better training data, significant progress could be made to mitigate language models’ harmful side effects.”

What does an armchair in the shape of an avocado look like?

Microsoft-backed research institution OpenAI shows impressive progress in text-to-image synthesis

Context

If I were to ask you what the important AI model advances in 2020 were, your answer would most likely include some of the following: Generative Models, Transformers for text (GPT-3), Transformers for images (ViT, Image GPT), and Transformers again (AlphaFold 2).

It was only a matter of time before one of the big players decided to merge all of these topics together and create a large scale text-to-image model.

What’s new

It comes as no surprise that OpenAI, the research lab responsible for GPT-3 and Image GPT, has taken on the challenge of creating large models that work with text-image pairs. Last week, the company published two blog posts introducing two such models: DALL·E and CLIP. The former is a model that leverages a reduced version of GPT-3 (12 billion parameters instead of the standard 175 billion) and is trained to generate images from text descriptions. The latter is a different neural network trained to learn visual concepts from natural language in order to classify images in a “zero-shot” manner, meaning the classes are only observed at inference time, not during training.
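
To see what zero-shot classification looks like in practice, here is a short sketch assuming OpenAI's open-source clip package and a local image file; the image path and candidate labels are made-up examples, and the labels are defined only at inference time, exactly as described above.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("photo.jpg")).unsqueeze(0).to(device)
# The candidate classes appear here for the first time; no retraining needed.
labels = ["an armchair", "an avocado", "an armchair in the shape of an avocado"]
text = clip.tokenize(labels).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

for label, p in zip(labels, probs[0]):
    print(f"{label}: {p:.2%}")
```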

As you might have guessed, DALL·E is a transformer language model. Using a (presumably large) training dataset of text-image pairs, the model receives the text and the image as a single tokenized stream of data. This formulation allows the model to generate images from scratch, conditioned on a text description.

While OpenAI has yet to publish a paper explaining the theoretical details behind DALL·E, the blog post allows us to make some educated guesses regarding the model’s architecture. Looking at the references made to other research papers in the side notes, it seems that the model is a combination of GPT-3 and a Vector Quantized-Variational AutoEncoder (VQ-VAE).

As hypothesized in an explanation video by Yannic Kilcher, the custom scaled-down GPT-3 model would be responsible for taking the text input and transforming it into a sequence of tokens drawn from a fixed vocabulary. The objective is for this sequence to be a sensible latent representation of the image. The decoder part of a VQ-VAE model can then use this sequence of tokens to generate the image. A simplified sketch of this hypothesized pipeline is given below.
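
The sketch below is a heavily simplified, hypothetical PyTorch illustration of that idea: text tokens and VQ-VAE image codes modelled as one autoregressive stream, with the image codes meant to be decoded by a separate VQ-VAE. All sizes, module names, and the architecture itself are invented for illustration; OpenAI has not published DALL·E's actual implementation.

```python
import torch
from torch import nn

TEXT_VOCAB, IMAGE_VOCAB = 16384, 8192   # assumed vocabulary sizes
TEXT_LEN, IMAGE_LEN = 64, 256           # assumed sequence lengths

class TinyTextToImageTransformer(nn.Module):
    def __init__(self, d_model=256, n_layers=4):
        super().__init__()
        vocab = TEXT_VOCAB + IMAGE_VOCAB
        self.embed = nn.Embedding(vocab, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab)

    def forward(self, tokens):
        # Causal mask: each position attends only to earlier tokens.
        n = tokens.size(1)
        mask = torch.triu(torch.full((n, n), float("-inf")), diagonal=1)
        return self.head(self.blocks(self.embed(tokens), mask=mask))

# One training example: the text tokens followed by the image's VQ-VAE codes,
# treated as a single sequence and trained with next-token prediction.
text_tokens = torch.randint(0, TEXT_VOCAB, (1, TEXT_LEN))
image_codes = TEXT_VOCAB + torch.randint(0, IMAGE_VOCAB, (1, IMAGE_LEN))
stream = torch.cat([text_tokens, image_codes], dim=1)
logits = TinyTextToImageTransformer()(stream)

# At inference time, image codes sampled from the transformer would be fed to
# a pretrained VQ-VAE decoder to reconstruct the pixels of the image.
```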

The results found in the blog post are very impressive. OpenAI has cherry-picked some textual inputs that you can modify in this part of the blog post. I highly recommend playing around with their examples to get a good grasp of how well the model performs.

Source: OpenAI Blog

While the model is excellent at reproducing local information (such as different styles, textures, and colors), it is less accurate when it comes to global information (such as counting and ordering objects, temporal and geographical knowledge).

Why it matters

The business potential of text-to-image use cases is immeasurable. Particular fields like stock photography and illustration are the first that come to mind. While the use of Transformers comes as no surprise, the impressive results reinforce the reasoning behind the trend. This work is a very important milestone in text-to-image synthesis, an area of research that has only been around since 2016.

What’s next

The researchers state that they “plan to provide more details about the architecture and training procedure in an upcoming paper.” Future research will tackle “how models like DALL·E relate to societal issues like economic impact on certain work processes and professions, the potential for bias in the model outputs, and the longer-term ethical challenges implied by this technology,” the team wrote.

To play around with DALL·E yourself, check out OpenAI’s blog post.

Deep Learning Tumor Contouring deployed in Addenbrooke’s Hospital

A Microsoft AI tool has been deployed in a Cambridge Hospital to help speed up cancer treatment

Context

The potential impact of Deep Learning solutions on augmenting the imaging workflow in healthcare is immense. As we’ve seen over the past years, the necessary technology exists. Computer Vision algorithms consistently achieve high performance in object detection and image classification tasks when constrained to specific domains. The main obstacles are (1) small training sets, (2) data privacy and security compliance concerning the use of patient scans, (3) addressing ethical considerations, and (4) finding a way to seamlessly integrate such Deep Learning models into health professionals’ daily workflows. In fact, the vast majority of healthcare solutions leveraging Deep Learning methods remain in research labs.

What’s new

After working for eight years with the Microsoft Research Lab in Cambridge on a pilot version of CT scan tumor-highlighting software, Addenbrooke’s Hospital will now use the solution, called InnerEye, in clinical practice.

The solution leverages a Neural Network to contour tumors and healthy organs on 3D CT (Computed Tomography) scans. This lengthy procedure (several hours) is usually performed by highly specialized health professionals. It is an extremely important part of a patient’s cancer treatment, as these contours are used to guide high-intensity radiation beams whose objective is to damage the DNA of cancerous cells while avoiding the surrounding healthy organs.

Source: Microsoft AI Blog
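
For readers unfamiliar with how contouring is framed as a learning problem, here is a generic, hypothetical sketch of volumetric segmentation in PyTorch: each voxel of a CT volume is assigned a class such as background, healthy organ, or tumor. This is only an illustration and not InnerEye's actual architecture (Microsoft's real code is open source, as noted further below).

```python
import torch
from torch import nn

class TinySegmenter3D(nn.Module):
    def __init__(self, n_classes=3):  # e.g. background, healthy organ, tumor
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, n_classes, kernel_size=1),
        )

    def forward(self, volume):
        # volume: (batch, 1, depth, height, width) of CT intensities
        return self.net(volume)  # per-voxel class logits

ct_subvolume = torch.randn(1, 1, 32, 64, 64)              # toy CT patch
contours = TinySegmenter3D()(ct_subvolume).argmax(dim=1)  # predicted label per voxel
```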

Trained on the hospital’s own data, InnerEye is able to perform this contouring task 13 times faster than a human. As stated by Dr. Raj Jena, Oncologist at Addenbrooke’s, “the results from InnerEye are a game-changer. To be diagnosed with a tumor of any kind is an incredibly traumatic experience for patients. So as clinicians we want to start radiotherapy promptly to improve survival rates and reduce anxiety. Using machine learning tools can save time for busy clinicians and help get our patients into treatment as quickly as possible.”

The whole procedure is integrated into an oncologist’s routine following an augmented-intelligence approach: the contouring done by InnerEye is checked and confirmed by a clinical oncologist before the patient receives treatment. “The AI is helping me in my professional role; it’s not replacing me in the process. I double-check everything the AI does and can change it if I need to. The key thing is that most of the time, I don’t need to change anything,” says Yvonne Rimmer, a Clinical Oncologist at the Hospital.

Why it matters

Deep Learning had never before been deployed in an oncologist’s daily routine. With this move, Addenbrooke’s Hospital becomes the first hospital in the world to successfully leverage this type of ground-breaking technology to improve survival rates for some cancers.

In a country where up to half of all people are diagnosed with cancer at some point in their lives, such technologies will allow doctors to treat patients faster.

What’s next

The goal of Microsoft’s InnerEye project is to “Democratize Medical Imaging AI”. As such, they have made the code available online. In the case of Addenbrooke’s, the Deep Learning models are hosted on Microsoft Azure, ensuring that all data is securely kept in the UK and available only to the oncologists who need it.

This highlights an important aspect of this deployment: while the software has been open-sourced by Microsoft, its clinical use remains subject to regulatory approval. Addenbrooke’s is a medical center renowned internationally for dealing with rare and complex conditions that require cutting-edge facilities and equipment as well as the best doctors. In that regard, it comes as no surprise that the Hospital is at the forefront of innovation in healthcare. It does, however, raise the question: when and where can we expect widespread use of such technologies?

Don’t forget to subscribe!

If you want to receive a summarized version of the Visium Digest in your inbox every two weeks, subscribe below!

Arnaud Dhaene
