source link: https://en.pingwest.com/a/9112

Protected by new data protection law, Chinese netizens can now say no to algorithms

Chen Du

posted on August 23, 2021, 1:36 pm. Editor: Wang Boyuan

Algorithms are no longer safe behind black boxes, as new law orders companies to give users options and be transparent.

In the Season 2 premiere of the dystopian drama series Black Mirror, a grieving woman uploads all of her late partner's data into an uncannily lifelike android. The episode's storytelling primarily revolves around the futility of holding on to what is gone for good, but it does so through a techno-dystopian question that people worldwide are already fiercely debating: data as the "DNA" of our identities in the digital age.

What troubles many people is that as we participate in ever more digital activities, tech companies accumulate vast amounts of our data and use it to build algorithms that supposedly cater to our preferences. In reality, these algorithms, and the companies behind them, have become so powerful that they effectively dictate what information we see. Worse, these algorithms operate in positive feedback loops: we live in echo chambers and, in the end, only see more of what we have already been fed.

Legislators and governments worldwide are seeing this problem ever more clearly and are working to solve it. Among them, China has just become the most aggressive, with a new personal data protection law passed last week and set to take effect in November this year.

The People's Republic of China Personal Information Protection Law (the law) defines "personal information" as identified or identifiable information about natural persons, whether recorded electronically or otherwise, making clear that the law's primary focus is personal data.

The law then adds several clear-cut protections to users. Examples:

  1. Users should be given the option to decline automated, personalized information pushes and commercial marketing, or to opt for recommendations that are not targeted at their identifiable profiles.
  2. Service providers should obtain users' separate consent when handling sensitive personal information, such as biometrics, medical health records, financial accounts, and location history.
  3. The law also states clearly that services failing to comply can be ordered to suspend or even cease operations.

It is even more apparent that the law is explicitly targeting algorithms.

People have long debated whether algorithms are inherently at fault. Some technologists argue that an algorithm is a purely technical construct, neutral by design; they even note that, much of the time, designers themselves do not fully understand how their algorithms work.

Meanwhile, critics counter that algorithms are deliberately designed to create echo chambers, since they literally cannot work at full capacity without positive feedback loops. These loops take forms such as luring users into clicking more, keeping them in apps for extended periods, and even discriminatory pricing based on big data.
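To see why critics worry about these feedback loops, consider a toy simulation (purely illustrative, not from the article or any real recommender): a system that boosts a topic's weight every time the user clicks on it. Even a mild initial preference snowballs until one topic dominates the feed.

```python
import random

# Topic pool and user behavior here are invented for illustration.
TOPICS = ["news", "sports", "gossip", "science"]

def recommend(weights, rng):
    """Pick a topic with probability proportional to its current weight."""
    return rng.choices(TOPICS, weights=[weights[t] for t in TOPICS])[0]

def simulate(rounds=1000, boost=1.5, seed=42):
    rng = random.Random(seed)
    weights = {t: 1.0 for t in TOPICS}  # start with no bias
    # Assume the user has a mild taste for gossip: clicks it 80% of the
    # time it is shown, and never clicks anything else.
    for _ in range(rounds):
        shown = recommend(weights, rng)
        clicked = shown == "gossip" and rng.random() < 0.8
        if clicked:
            weights[shown] *= boost  # each click makes the topic more likely
    total = sum(weights.values())
    return {t: weights[t] / total for t in TOPICS}

shares = simulate()
print(shares)  # "gossip" ends up dominating the mix: the echo chamber
```

The dynamic is self-reinforcing: more exposure yields more clicks, which yields more exposure. This is the positive feedback loop the article refers to, stripped down to a few lines.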

Different people hold different views on whether algorithms are good or bad, and that is fine. What should be commonly agreed, however, is that many of today's big techs design algorithms to benefit themselves rather than their users. They provide free, convenient services on the condition that users agree to their terms, among which bad-faith clauses are rampant. This cleverly designed business model gets users accustomed to handing their data over to big techs at the cost of their identities, privacy, and digital freedom.

In this sense, the law finally gives users much-needed protection.

Article 16 of the law states that service providers may not deny services to users who withhold full access to their personally identifiable data (except where such data is necessary to provide the service), protecting users against bad-faith terms.

Meanwhile, Article 24 explicitly targets big techs' discriminatory big-data pricing, ordering that service providers must not charge users different prices based on their data and profiles.

As for personalized pushes that may confuse some users, the law also grants individuals the legal right to demand transparent explanations from service providers as to why they are being shown certain information.

Just as importantly, the law sets harsh penalties for tech companies: fines of up to 50 million RMB or 5% of the previous year's revenue for those found to have seriously broken the law. Such lawbreakers can also face rectification orders, revocation of business licenses, and even total shutdown. Executives in charge can be personally fined between 100,000 and 1 million RMB and barred from holding similar posts.

The public sees the law as a monumental step in the Chinese government's ongoing campaign against big techs and algorithms. A draft went public last year, and the final version follows a series of rapid, harsh crackdowns on big techs carried out by the powerful apparatus of the Chinese government, a period that was nerve-wracking for the industry during the first half of 2021. End users, however, can finally look forward to a future where their data privacy and digital identities are more comprehensively and stringently protected.

Top image credit: NPC