Artificial intelligence has long been a source of antagonism and “what-if” ponderings in sci-fi media. In recent years, real-world advances in AI have brought those musings to a head, with genuine concern over the capabilities of artificial intelligence and its potential effect on humanity and society. A recent study has done nothing to quell these concerns: a machine learning algorithm integrated with a robotics system was shown not only to draw sexist and racist conclusions about people but also to physically act out those harmful stereotypes in the study environment.
In the study, the researchers built their system around a neural network called CLIP, a machine learning model trained on a large internet database of captioned images. CLIP was then linked with a robotics system called Baseline, which uses a robotic arm to manipulate objects in a virtual or physical space. The robot was tasked with placing blocks in a box. Each block bore the face of an individual, with people of varying genders, races, and ethnicities represented in the study. The artificial intelligence was then asked to place in the box the blocks that matched a given description.
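To make that matching step concrete, the sketch below shows, in rough terms, how a CLIP model can score face images against a text command and pick the closest match. This is only an illustrative, hypothetical example: the file names, the prompt, and the choice of the ViT-B/32 checkpoint are assumptions, and the actual study embedded CLIP inside a full robot manipulation pipeline rather than a standalone scoring loop like this.

```python
# Hypothetical sketch: use CLIP image-text similarity to decide which
# block "matches" a text command. Not the study's actual code.
import torch
import clip  # OpenAI's CLIP package
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Made-up photos standing in for the faces printed on the blocks.
block_images = ["block_face_1.jpg", "block_face_2.jpg", "block_face_3.jpg"]
command = "pack the doctor block in the brown box"  # example instruction

text_tokens = clip.tokenize([command]).to(device)
with torch.no_grad():
    text_features = model.encode_text(text_tokens)
    text_features /= text_features.norm(dim=-1, keepdim=True)

    scores = []
    for path in block_images:
        image = preprocess(Image.open(path)).unsqueeze(0).to(device)
        image_features = model.encode_image(image)
        image_features /= image_features.norm(dim=-1, keepdim=True)
        # Cosine similarity between this face image and the command text.
        scores.append((image_features @ text_features.T).item())

best = max(range(len(block_images)), key=lambda i: scores[i])
print(f"Best-matching block: {block_images[best]} (score={scores[best]:.3f})")
```

Because a command like “pack the doctor block” names no visible attribute, a system built this way has to fall back on whatever associations it absorbed from its internet training data, which is exactly where the stereotyped behavior in the study came from.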
......