Robots fight for our Climate
Recycling robots use AI to make the sorting of single-stream waste financially viable
At the end of 2017, China announced it would stop importing recyclable waste. In response, Western countries, which had been the main exporters of waste to China, were forced to strengthen their domestic waste processing. Given the widespread adoption of IoT technology in industrial settings over recent decades, it comes as no surprise that Western countries have turned to robotics to solve this problem.
AMP Robotics, a company based in Louisville, Colorado, USA, sells and leases AI-driven recycling robots. Having raised $23 million in venture funding, the company has sold or leased 100 robots to more than 40 recycling plants across the globe.
The robots work on streams of recycling in which paper, plastics, and aluminum are mixed together. These mixed streams, known as single-stream recycling, are analyzed using proprietary computer vision techniques. The company's self-reported accuracy rate of 99% outperforms competing technologies such as optical sorters.
While human workers pick up 50 pieces of waste per minute on average, the robots can pick up 80.
Why it matters
After China’s ban, Western countries’ recycling streams were no longer pure enough. AMP’s robots allow for a cleaner recycling output with more downstream market value.
AMP has already started working on new projects. Extending beyond single-stream recycling, it has begun handling waste from electronics recycling as well as construction and demolition facilities.
How Facebook handles harmful content
Facebook AI Research reveals how Machine Learning is used to handle different forms of harmful content
As one of the leading social media platforms, Facebook is reliant on scalable and intelligent solutions to detect harmful content. The company has implemented a range of specific policies and products whose goal is to mitigate the spread of misinformation and harmful content on its platform. In short, these include (1) adding a warning to content that has been rated by third-party fact-checkers, (2) reducing the distribution of harmful content, and (3) removing misinformation if it can contribute to imminent harm.
The modern political environment is becoming increasingly polarized (which is partly due to social media platforms such as Facebook and the spread of misinformation). Therefore, it is interesting to see how large corporations such as Facebook handle scaling their efforts in detecting and mitigating the spread of harmful content and misinformation.
FAIR (Facebook AI Research) has recently developed two new Artificial Intelligence technologies to help protect people from hate speech. They claim to have proactively detected 94.7% of hate speech in Q3 2020 (compared to 80.5% in Q3 2019 and 24% in Q3 2017) using these new technologies.
The first technology, called Reinforced Integrity Optimizer (RIO), integrates real online examples and metrics into the training of Facebook’s classification models, so that classifiers are optimized directly for the signals that matter in production.
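To make the idea concrete, here is a toy sketch of training on examples weighted by a production signal. This is an illustration of the general principle only, not Facebook’s actual RIO implementation; the logistic model, the synthetic data, and the `impact` weights are all assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def weighted_logistic_step(w, X, y, weights, lr=0.1):
    """One gradient step on a logistic loss where each example is
    weighted by a production-derived importance signal."""
    p = sigmoid(X @ w)
    grad = X.T @ (weights * (p - y)) / len(y)
    return w - lr * grad

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
true_w = np.array([1.5, -2.0, 0.5, 0.0, 1.0])
y = (X @ true_w > 0).astype(float)          # synthetic "harmful / not" labels
impact = rng.uniform(0.1, 2.0, size=200)    # stand-in for online metrics

w = np.zeros(5)
for _ in range(500):
    w = weighted_logistic_step(w, X, y, impact)

acc = ((sigmoid(X @ w) > 0.5) == y).mean()
```

The point of the weighting is that the classifier spends its capacity on the examples that matter most in deployment, rather than treating every training example as equally important.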
The second technology, called Linformer, reduces the computational cost of training state-of-the-art models based on the Transformer architecture: it replaces standard self-attention, whose cost grows quadratically with input length, with a linear-time approximation, making it possible to train these models on longer pieces of text. The code is available online.
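The core trick, as described in the Linformer paper, is to project the keys and values from sequence length n down to a fixed length k before attention is computed. A simplified single-head NumPy sketch (the released implementation is more elaborate, and the shapes here are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def standard_attention(Q, K, V):
    # Scores are (n, n): cost grows quadratically with sequence length.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    return softmax(scores) @ V

def linformer_attention(Q, K, V, E, F):
    # Project keys/values along the sequence axis: (k, n) @ (n, d) -> (k, d).
    K_proj = E @ K
    V_proj = F @ V
    # Scores are now (n, k) with fixed k: cost is linear in n.
    scores = Q @ K_proj.T / np.sqrt(Q.shape[-1])
    return softmax(scores) @ V_proj

rng = np.random.default_rng(0)
n, d, k = 512, 64, 32   # sequence length, head dim, projected length
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
E, F = (rng.standard_normal((k, n)) / np.sqrt(n) for _ in range(2))
out = linformer_attention(Q, K, V, E, F)   # same (n, d) output shape
```

Because k stays fixed while n grows, memory and compute scale linearly with input length, which is what makes longer training texts affordable.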
Why it matters
Facebook currently uses both RIO and Linformer in production to analyze harmful content in many different regions around the world.
These gains in efficiency and coverage are paramount in dealing with hate speech and harmful content before it has a chance to spread.
In the long term, Facebook’s objective is to “deploy a state-of-the-art model that learns from text, images, and speech and effectively detects not just hate speech but human trafficking, bullying, and other forms of harmful content”. There is a long way to go before this objective becomes a reality, and as such users must remain wary of the adverse effects a social media platform such as Facebook can have.
Others will argue that these adverse effects have little to do with harmful content or hate speech itself: as long as internet platforms rely on intelligent systems designed to predict what information will keep you scrolling, rather than what you should be informed about, it remains wise to get your news from other sources as well.
Ethical considerations for GANs
Ethical considerations of GANs arise in the face of improved portrait-generating technologies
GANs, or Generative Adversarial Networks, are a powerful Artificial Intelligence technique able to generate new data that statistically resembles its training dataset. Two networks are trained against each other: a generator produces synthetic samples while a discriminator learns to tell them apart from real data. This method is used for diverse applications such as the creation of synthetic datasets.
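The adversarial setup can be sketched on a deliberately tiny 1-D task: the generator learns to shift noise so that its samples match real data drawn from N(3, 1). The shift-only generator and logistic discriminator below are toy stand-ins; real GANs use deep networks on images.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Discriminator D(x) = sigmoid(w*x + c); generator G(z) = z + b.
w, c, b = 0.0, 0.0, 0.0
lr, steps, batch = 0.05, 2000, 64

for _ in range(steps):
    real = rng.normal(3.0, 1.0, batch)
    fake = rng.normal(0.0, 1.0, batch) + b

    # Discriminator ascent step: maximize log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator ascent step (non-saturating loss): maximize log D(fake).
    d_fake = sigmoid(w * fake + c)
    b += lr * np.mean((1 - d_fake) * w)
```

After training, the learned shift b should sit near the real mean of 3: the generator has learned to produce samples the discriminator can no longer separate from real data.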
GANs have also been used to create stock photos with dummy faces, with the goal of demonstrating and promoting diversity, which can in turn generate business opportunities. This particular use of GANs raises numerous ethical questions.
Nowadays, GANs have become so powerful that their outputs are almost indistinguishable from real photographs. Moreover, the generated content can now easily be customized and edited: modifying the shade of someone’s skin or the color of their hair is possible at the click of a button.
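One common technique behind such one-click edits is latent-space arithmetic: averaging the latent codes of images with and without an attribute yields a direction, and shifting a code along it edits that attribute. The sketch below is hypothetical; the 512-dimensional latent space and synthetic codes are stand-ins for a real generator’s latents.

```python
import numpy as np

rng = np.random.default_rng(7)
dim = 512  # assumed latent dimensionality

# Latent codes of images labeled with / without the target attribute
# (e.g. blond hair). The +0.5 offset fakes an attribute signal for the demo.
z_with = rng.standard_normal((100, dim)) + 0.5
z_without = rng.standard_normal((100, dim))

# The mean difference points from "without" toward "with".
direction = z_with.mean(axis=0) - z_without.mean(axis=0)
direction /= np.linalg.norm(direction)

z = rng.standard_normal(dim)         # latent code of the image to edit
alpha = 3.0                          # edit strength chosen by the user
z_edited = z + alpha * direction     # decode z_edited to get the edited image
```

Feeding `z_edited` back through the generator produces the same face with the attribute strengthened, which is exactly what makes these edits so cheap and fast.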
The business opportunities resulting from this technological advance are endless. In fact, promoting ‘counterfeit’ diversity is now cheaper than ever before. Photo agency databases predominantly contain images of white men; minorities are severely under-represented. Using GANs, businesses can quickly produce thousands of fake photographs depicting a diverse range of people. Relying on such fake diversity, however, risks building a false illusion: it becomes easy to publicly project an image of diversity while excluding minorities from the company itself.
From an ethical perspective, the use of this technology raises questions about the trustworthiness of what can be found online, even from reputable sources.
Why it matters
GANs are considered one of the most powerful machine learning technologies available today. However, awareness should be raised concerning the ethical implications of these techniques. They are already being used, for instance, by people impersonating journalists on Twitter with generated profile pictures.
The progress observed between 2014, when GANs were first introduced, and today is quite extraordinary. This raises the question: where will this technology stand 5 years from now? Increased interest and awareness concerning the ethical use of AI technologies are necessary. AI governance and accountability are important issues that must be confronted sooner rather than later.