- The problem of political disinformation, especially deepfakes, has attracted considerable attention as India’s elections approach.
- Deepfakes use artificial intelligence to produce images, videos, and voices that appear real but are entirely fabricated. These sights and sounds can deceive and influence users.
What is a Deepfake?
- Deepfakes refer to synthetic media or manipulated content created using deep learning algorithms, specifically generative adversarial networks (GANs).
- Deepfakes involve altering or replacing the appearance or voice of a person in a video, audio clip, or image to make it seem like they are saying or doing something they never actually did. The term “deepfake” is a combination of “deep learning” and “fake.”
- Deepfake technology utilizes AI techniques to analyze and learn from large datasets of real audio and video footage of a person.
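The adversarial idea behind GANs can be illustrated with a toy, pure-Python sketch: a “generator” learns to imitate a real data distribution (here just the constant 4.0) while a “discriminator” learns to tell real samples from generated ones. This is a minimal illustration of the training loop only, not an actual deepfake pipeline; the one-dimensional setup, learning rate, and parameter names are all invented for the sketch.

```python
# Toy 1-D GAN sketch (illustrative only, not a real deepfake system).
# Generator g(z) = a*z + b tries to mimic "real" data (the constant 4.0);
# discriminator d(x) = sigmoid(w*x + c) tries to tell real from fake.
import math
import random

random.seed(0)

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

w, c = 0.0, 0.0      # discriminator parameters
a, b = 1.0, 0.0      # generator parameters
lr = 0.05
REAL = 4.0           # the "real" data distribution: a constant, for simplicity

for step in range(2000):
    z = random.uniform(-1.0, 1.0)

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    fake = a * z + b
    s_real = sigmoid(w * REAL + c)
    s_fake = sigmoid(w * fake + c)
    w += lr * ((1.0 - s_real) * REAL - s_fake * fake)
    c += lr * ((1.0 - s_real) - s_fake)

    # Generator step: push d(fake) toward 1 (non-saturating loss).
    fake = a * z + b
    s_fake = sigmoid(w * fake + c)
    a += lr * (1.0 - s_fake) * w * z
    b += lr * (1.0 - s_fake) * w

# Since z is zero-mean, the average generated sample is simply b,
# which adversarial training drags toward the real value 4.0.
fake_mean = b
```

Real deepfake systems apply the same adversarial principle to high-dimensional image and audio data with deep neural networks rather than the two-parameter functions used here.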
- The central government’s recommended countermeasure against political deepfakes is based on Rule 4(2) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
- Under this rule, significant social media intermediaries providing messaging services must be able to identify the “first originator of the information” on their platforms.
- Court orders or government action can be used to obtain originator information.
- This provision primarily targets end-to-end encrypted platforms, such as WhatsApp.
- Users’ privacy is protected by end-to-end encryption, which makes sure that only the sender and recipient can access the message content.
- Law enforcement organizations, however, face difficulties because they are unable to access messages.
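The end-to-end guarantee described above can be pictured with a deliberately simplified sketch: the relaying platform only ever sees ciphertext, while the shared key stays with the sender and recipient. The XOR “cipher” below is a toy stand-in for real encryption (WhatsApp, for instance, uses the Signal protocol) and is not secure; all names here are invented for illustration.

```python
# Toy illustration of end-to-end encryption: only the endpoints hold the key,
# so the server in the middle relays bytes it cannot read.
# The XOR cipher is NOT secure -- it only stands in for a real algorithm.
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Repeating-key XOR: encryption and decryption are the same operation.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

shared_key = secrets.token_bytes(32)          # known only to sender and recipient
plaintext = "meet at 5pm".encode("utf-8")

ciphertext = xor_bytes(plaintext, shared_key)  # all the relaying server ever sees
decrypted = xor_bytes(ciphertext, shared_key)  # only an endpoint with the key can do this
```

Because the intermediary holds only `ciphertext` and never `shared_key`, it cannot reveal message content even under a legal order, which is precisely the tension Rule 4(2) runs into.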
- This is akin to a proposal that, in order to deter crime, gives each resident a “movement tag” tracing their every step once they leave their home.
- When such information is gathered and made available to the government, privacy issues are a concern.
- Rule 4(2) enumerates several grounds for invoking the provision, including threats to India’s sovereignty, security, and public order, as well as sexual offences punishable with imprisonment of not less than five years.
- But the grounds leave room for interpretation, particularly “public order,” which can be invoked in a wide variety of contexts.
- Monitoring encrypted messages for small-scale problems could be considered an overly intrusive practice.
- The rule’s definition of the “first originator” of a message remains vague, raising concerns about how it will be applied and its consequences for users.
- People who copy and paste existing messages may unintentionally become “new originators” of them.
- In addition, to catch a few possible miscreants, traceability requires maintaining logs of the origin of every message, jeopardizing the privacy of all messaging users.
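One hypothetical way to picture the traceability requirement, and the “new originator” problem it creates, is a platform-side log keyed by a hash of each message: the first sender of a given hash is recorded as its originator, and any trivially edited copy hashes differently and acquires a fresh “originator.” This is an illustrative sketch, not how any platform actually implements Rule 4(2); the class and method names are invented.

```python
# Hypothetical sketch of hash-based originator logging. Every message's
# origin must be recorded, which is exactly the privacy cost noted above.
import hashlib

class TraceLog:
    def __init__(self) -> None:
        # message hash -> username of the first sender seen with that hash
        self._first_seen: dict[str, str] = {}

    def record(self, sender: str, message: str) -> str:
        """Log the message and return who counts as its 'first originator'."""
        digest = hashlib.sha256(message.encode("utf-8")).hexdigest()
        # setdefault keeps the earliest sender; later senders map to them.
        return self._first_seen.setdefault(digest, sender)
```

A forwarded copy traces back to the first sender, but even a one-character edit produces a different hash, so the editor is logged as a brand-new originator, illustrating why copy-and-paste defeats this kind of scheme.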
The Potential Floodgate of Message Traceability
- If Rule 4(2) is put into effect, it may open the floodgates of message traceability, compromising users’ privacy while perhaps still failing to deter or punish those who spread political disinformation.
- In combating political deepfakes and disinformation, the government must consider proportionate measures that preserve users’ privacy and fundamental rights.
In the digital age, striking a balance between the necessity to suppress false information and the defense of personal privacy continues to be extremely difficult.