In the aftermath of the London attack on March 22nd, Google came under fire from major companies whose advertising appeared alongside videos uploaded by jihadists, as well as other distasteful content. Since then, Google has been exploring ways to keep inappropriate content away from its viewers. Recent reports indicate that Google has developed technology that allows its systems to identify offensive content and prevent it from being displayed on the site.
These systems can identify the context of a piece of content and then flag it for removal. The tech giant has long been criticized for its inability to control what is posted on its online platforms; this new technology could finally change that.
Google now wants the computers that monitor content uploaded to YouTube and other channels to understand the nuances of what makes a video offensive. The company is feeding its systems human-vetted examples of safe and unsafe content as reference points. Uploaded content is divided into fragments, each of which is analyzed individually. The system can also listen to the accompanying audio and read the description before deciding whether the content is suitable for the site; if not, it can be removed.
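The article does not describe Google's actual system, but the workflow it outlines can be sketched in miniature: split an upload into fragments, compare each fragment (plus the audio transcript and description) against human-vetted safe and unsafe reference examples, and reject the upload if any piece matches the unsafe class more closely. All names below are hypothetical, and the token-overlap similarity stands in for whatever machine-learning models Google actually uses.

```python
# Minimal sketch of the moderation flow described above. The similarity
# measure (Jaccard overlap of word tokens) is a deliberately simple stand-in
# for a real learned classifier; function names are illustrative only.

def tokenize(text):
    """Lower-case a string and return its set of word tokens."""
    return set(text.lower().split())

def jaccard(a, b):
    """Similarity between two token sets (0.0 when either is empty)."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def score_fragment(fragment, safe_refs, unsafe_refs):
    """Return the fragment's best similarity to each reference class."""
    toks = tokenize(fragment)
    safe = max((jaccard(toks, tokenize(r)) for r in safe_refs), default=0.0)
    unsafe = max((jaccard(toks, tokenize(r)) for r in unsafe_refs), default=0.0)
    return safe, unsafe

def moderate(fragments, description, audio_transcript, safe_refs, unsafe_refs):
    """Analyze every fragment plus the audio transcript and description;
    reject the upload if any piece looks more unsafe than safe."""
    for piece in list(fragments) + [description, audio_transcript]:
        safe, unsafe = score_fragment(piece, safe_refs, unsafe_refs)
        if unsafe > safe:
            return "rejected"
    return "approved"
```

A benign upload whose fragments, description, and audio all resemble the safe examples would come back `"approved"`, while one whose fragments overlap the unsafe references would be `"rejected"`. The per-fragment loop mirrors the article's point that each piece of an upload is analyzed individually rather than the video being judged as a whole.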
“Computers have a much harder time understanding context, and that's why we're actually using our entire latest and greatest machine learning abilities now to get a better feel for this.”
© 2022 CIO Bulletin. All rights reserved.