
Facebook bans ‘violent network’ of far-right boogaloo accounts

Facebook took action to remove a network of accounts Tuesday related to the “boogaloo” movement, a firearm-obsessed anti-government ideology that focuses on preparing for and potentially inciting a U.S. civil war.

“As part of today’s action, we are designating a violent US-based anti-government network under our Dangerous Individuals and Organizations policy and disrupting it on our services,” Facebook wrote in the announcement. “As a result, this violent network is banned from having a presence on our platform and we will remove content praising, supporting or representing it.”

In its announcement, the company made a distinction between “the broader and loosely-affiliated boogaloo movement” and the violent group of accounts it identified; we’ve asked Facebook to clarify how, or if, it will distinguish between the two moving forward.

On Tuesday, Facebook removed 220 Facebook accounts, 28 pages, 106 groups (some public, some private) and 95 Instagram accounts related to the network it identified within the boogaloo movement.

A Facebook spokesperson clarified that today’s actions don’t mean all boogaloo content will be subject to removal. The company will continue to focus on boogaloo activity tied to potential real-world violence, like the new cluster of content taken down. The new designation of some boogaloo networks as “dangerous organizations” does mean that Facebook will scan its platform for symbols connected to the accounts that meet that designation.

The company notes that it has been monitoring boogaloo content since 2019, but previously removed content only when it posed a “credible” threat of offline violence. It cited the emergence of such real-world threats in its decision to more aggressively identify and remove boogaloo content.

“… Officials have identified violent adherents to the movement as those responsible for several attacks over the past few months,” the company wrote in its blog post. “These acts of real-world violence and our investigations into them are what led us to identify and designate this distinct network.”

Earlier this month, an Air Force sergeant found with symbols connected to the boogaloo movement was charged with murder for killing a federal security officer during protests in Oakland.

In an April report, the watchdog group Tech Transparency Project detailed how extremists committed to the boogaloo movement “[exchange] detailed information and tactics on how to organize and execute a revolt against American authorities” in Facebook groups, some private. Boogaloo groups appear to have flourished on the platform in the early days of the pandemic, with politicized state lockdowns, viral misinformation and general uncertainty fueling fresh interest in far-right extremism.

As the Tech Transparency Project report explains, the boogaloo movement initially used the cover of humor, memes and satire to disguise an underlying layer of real-world violent intent. Boogaloo groups have a mix of members with varying levels of commitment to real-world violence and race-based hate, but organizations studying extremism have identified overlap between boogaloo supporters and white supremacist groups.

Facebook’s action against the boogaloo movement comes the same day that Democratic senators wrote a letter to the company demanding accountability for its role in amplifying white supremacy and other forms of far-right extremism. In the letter, addressed to Mark Zuckerberg, lawmakers cited activity by members of boogaloo groups as part of Facebook’s “failure to address the hate spreading on its platform.”



from Social – TechCrunch https://ift.tt/2CXsXd9
via IFTTT

What 👁👄👁.fm means for Silicon Valley

In 36 hours, a diverse group of young entrepreneurs and technologists raised more than $200,000 for three charities supporting people of color and the LGBTQ community: The Okra Project, The Innocence Project and The Loveland Foundation.

How did they do it? Why did they do it?

The answers are important to understanding the future of tech. This is the first real example of how and why Gen Z will build companies. 👁👄👁.fm and the people behind it reflect broader trends in youth culture.

VCs should take note. These are the people who will build the next Facebook.

Everyone else should rejoice. Young technologists are building a new future on a new set of values. Their values are informed by the first-hand experience of growing up with the perverse incentives of yesterday’s social media and a genuine desire to create a better world — online and off.

It all began on Thursday night when a group of friends started riffing on a TikTok meme. In today’s world, language is constantly evolving — 👁👄👁 emerged as a particular spin on the phrase: “It is what it is.” Josh Constine explains, “👁👄👁 means you feel helpless amidst the chaotic realities unfolding around us, but there is no escape.”

The group of friends added the emojis to their Twitter handles and began tweeting about 👁👄👁.fm, a nonexistent invite-only social app. Unexpectedly, the trend started gaining momentum and the inside joke got out of hand. Conversations erupted on the group’s Discord server as they discussed what to do next. Could they channel the hype into impact?

Vernon Coleman, founder of synchronous social app Realtime and “Head of Hype” at 👁👄👁.fm, reflected: “What started as a meme quickly gained steam! We realized the opportunity and felt that we had a responsibility to convert the momentum for social good. I think it’s amazing what can happen when skilled creatives get together and collaborate in real-time.”

Where should the team focus their efforts? The answer was clear. The group wrote in a post on Friday: “… we didn’t have to think too hard: In this moment, there’s pretty much no greater issue to amplify than the systemic racism and anti-Blackness much of the world is only beginning to wake up to.”

Since Thursday, the group has accumulated more than 20,000 email sign-ups and 11,000 Twitter followers, and raised over $200,000 in donations.

Cynics have called it a “well-executed marketing campaign” or suggested that it was an ill-intentioned prank. Not everything went perfectly, and the team has acknowledged the missteps. But, we shouldn’t trivialize or marginalize what they accomplished and why they did it.

In one fell swoop, the team chastised Silicon Valley’s use of exclusivity as a marketing tactic, trolled thirsty VCs for their desire to always be first on the next big thing, deftly leveraged the virality of Twitter to build awareness and channeled that awareness into dollars that will have a real impact on groups too often overlooked.

This group of 60 young tech leaders took the tools of the titans into their hands to make an impact while making a statement.

They weren’t the most connected people on Twitter. Many of the team have follower counts in the hundreds, not the hundreds of thousands. But, they understand the tools as well as the tech elite.

This is the latest in a string of movements created by Gen Z leaders and activists. Gen Z is able to amplify its voice even on platforms, like Twitter and Facebook, considered the domain of millennials and Gen X.

We first saw this with the Parkland school shooting, when high school students took over Twitter, then Facebook, then cable news to add a voice of reason to a gun debate that had devolved into partisan talking points.

Over the last three years, I’ve spent dozens of hours talking with young users and product builders — this has been an important part of my job as the chief product officer at Tinder, a product director on Facebook’s Youth team and an angel investor. Many of the sentiments expressed by the 👁👄👁.fm team reflect broader feelings in Gen Z:

Gen Z is tired of a boomer generation that seems more focused on reaping their last bit from the world than passing it on in better shape.

Gen Z is fed up with exclusive clubs and virtual velvet ropes. The latest example is Clubhouse, an invite-only social app that raised at a $100 million valuation despite being only a few months old and catering to only a few thousand users — among them Oprah and Kevin Hart.

For tech insiders, Clubhouse is the place to be. For Gen Z outsiders, it’s the latest example of Black celebrity being used to make predominantly white founders and investors rich.

Gen Z entrepreneurs and tech leaders are tired of a tech industry that talks about inclusivity, but then uses exclusivity as a marketing ploy. This has been a practice for more than a decade. It started with Gmail, the first app to use private invites at scale — a tactic widely copied.

Today, Silicon Valley insiders are clamoring for invites to HEY, a recently released email app that notoriously charges for two- and three-letter email addresses ($999 per year for a two-letter address and $375 for a three-letter address). The short name up-charge is a cynical money-making scheme from a company whose founders, Jason Fried and David Heinemeier Hansson, evangelize a fairer and more empathetic approach to technology. Critics have pointed out that their business model unfairly — and likely unintentionally — targets ethnic groups who have a tradition of shorter names.

Finally, Gen Z is tired of a tech industry that talks about diversity, but doesn’t practice it. Black and Hispanic people continue to be underrepresented at major tech companies, particularly at the leadership level. This underrepresentation is even worse for entrepreneurs. Just 1% of venture-backed founders are Black.

Silicon Valley isn’t trying hard enough.

“We hear repeatedly that there’s a pipeline problem in tech VC and employment … that’s bullshit. We were able to bring together different age groups, cultural backgrounds, skills, genders and geographies … all based on a random selection process of people putting a meme in their profile … the Valley should realize that you can literally throw darts and get results,” said Coleman. “If the industry is about that action imagine the magic we’d all create together.”

The story of 👁👄👁.fm highlights an important truth. If the tech industry doesn’t create the future Gen Z wants, there’s no need to worry. They’ll create it for themselves.

Will you help them?

Make the hire. Send the wire. — Tiffani Ashley Bell, founding executive director at The Human Utility.

The team behind 👁👄👁.fm supports:

  • The Okra Project — a collective that seeks to address the global crisis faced by Black trans people by bringing home-cooked, healthy and culturally specific meals and resources to Black trans people wherever we can reach them.
  • The Innocence Project — its mission is to free the staggering number of innocent people who remain incarcerated, and to bring reform to the system responsible for their unjust imprisonment — a plight that disproportionately affects people of color.
  • The Loveland Foundation — makes it possible for Black women and girls nationally to receive therapy support. Black women and girls deserve access to healing, and that healing will impact generations.


from Social – TechCrunch https://ift.tt/3ijm99O
via IFTTT

With advertiser boycott growing, lawmakers press Facebook on white supremacy

In a new letter to Mark Zuckerberg, three Democratic lawmakers pressed the Facebook chief executive for accountability on his company’s role in amplifying white supremacy and allowing violent extremists, like those in “boogaloo” groups, to organize on its platform.

Citing the “long-overdue” national reckoning around racial injustice, Senators Mazie Hirono (D-HI), Mark Warner (D-VA), and Bob Menendez (D-NJ) wrote to Zuckerberg in an effort to highlight the rift between Facebook’s stated policies and its track record.

“The United States is going through a long-overdue examination of the systemic racism prevalent in our society. Americans of all races, ages, and backgrounds have bravely taken to the streets to demand equal justice for all,” the senators wrote.

“While Facebook has attempted to publicly align itself with this movement, its failure to address the hate spreading on its platform reveals significant gaps between Facebook’s professed commitment to racial justice and the company’s actions and business interests.”

The letter demands answers to a number of questions, some of which are relatively superficial asks for further commitments from Facebook to enforce its existing rules. But a few hit on something more interesting, calling on Zuckerberg to name the Facebook employee whose job explicitly addresses the spread of white supremacy on the platform and asking the company to elaborate on the role that Joel Kaplan, Vice President of Global Public Policy and Facebook’s most prominent conservative voice, played in shaping the company’s approach to extremist content.

The senators also ask if Kaplan influenced Facebook’s puzzling decision to include The Daily Caller, the right-wing news site co-created by Tucker Carlson and linked to white supremacists, as a partner in its fact-checking program.

The senators’ final question includes a thinly-veiled threat to Section 230 of the Communications Decency Act, a law protecting platforms from legal liability for user generated content. Last month, President Trump launched his own attack against the vital legal shield, which makes internet businesses possible and also undergirds the modern social internet as we know it.

The letter from lawmakers comes as Facebook faces a fresh wave of scrutiny around its platform policies from the #StopHateforProfit campaign. Launched by a group of civil rights organizations including the Anti-Defamation League, Color of Change and the NAACP, the Facebook advertising boycott has swelled to encompass a surprising array of huge mainstream brands including Coca-Cola, Best Buy, Ford and Verizon. Other brands on board include Adidas, Ben & Jerry’s, Reebok, REI, Patagonia and Vans.

While the unlikely mix of companies likely represents a similarly heterogeneous mixture of motivations for temporarily suspending their Facebook ad spending, the initiative does make specific policy demands. On its webpage, the campaign advocates for some specific product changes, calling on Facebook to remove private groups centered on white supremacy and violent conspiracies, disable its recommendation engine for hate and conspiracy groups, and hire a “C-suite level executive” who specializes in civil rights.



from Social – TechCrunch https://ift.tt/3dOtGde
via IFTTT

Facebook says it will prioritize original reporting and ‘transparent authorship’ in the News Feed

Facebook announced this morning that stories with original reporting will get a boost in the News Feed, while publications that don’t clearly credit their editorial staff will be demoted.

The change comes as a number of high-profile companies have said that they will pull their advertising from Facebook as part of the #StopHateforProfit campaign, organized by civil rights groups as a way to pressure the social network to take stronger steps against hate speech and misinformation.

On Friday, CEO Mark Zuckerberg announced that the company will start labeling — but not removing — “newsworthy” content from politicians and other public figures that violates its content standards. (He also said that content threatening violence or suppressing voter participation will be removed even if it’s posted by a public figure.)

Today’s blog post from VP of Global News Partnerships Campbell Brown and Product Manager Jon Levin doesn’t mention the ad boycott, and it suggests that these changes were developed in consultation with news publishers and academics. But these certainly sound like concrete steps the company can point to as part of its efforts against misinformation.

What gets prioritized in the News Feed has long been a thorny issue for publishers, particularly after a major change in 2016 that prioritized content from friends over content from publishers.

“Most of the news stories people see in News Feed are from sources they or their friends follow, and that won’t change,” Brown and Levin wrote. “When multiple stories are shared by publishers and are available in a person’s News Feed, we will boost the more original one which will help it get more distribution.”

As for “transparent authorship,” Facebook will be looking for article bylines, or for a staff page on the publisher’s website. As Brown and Levin noted, “We’ve found that publishers who do not include this information often lack credibility to readers and produce content with clickbait or ad farms, all content people tell us they don’t want to see on Facebook.”

While these seem like smart, straightforward changes (Google announced similar steps last fall), Brown and Levin also warned publishers not to expect “significant changes” in their Facebook traffic, since there are a “variety of signals” that go into how content gets ranked in the News Feed.

Also worth noting: These changes only apply to news content.



from Social – TechCrunch https://ift.tt/3iaI2Ix
via IFTTT

TikTok goes down in India, its biggest overseas market

A growing number of internet service providers in India have started to block their subscribers from accessing TikTok a day after New Delhi banned the popular short-video app and 58 other services in the world’s second largest internet market over security and privacy concerns.

Many users on Airtel, Vodafone and other service providers reported Tuesday afternoon (local time) that the TikTok app on their phones was no longer accessible. Opening the app, users said, showed that they were no longer connected to the internet.

For many others, opening the TikTok app prompted an error message saying that the popular app was complying with the Indian government’s order and could no longer offer its service. Opening the TikTok website in India prompts a similar message.

Earlier on Tuesday, the TikTok app became unavailable for download on Apple’s App Store and Google Play Store in India. Two people familiar with the matter told TechCrunch that ByteDance, the developer of TikTok, had voluntarily pulled the app from the app stores.

The vast majority of the other apps India blocked on Monday evening, including Alibaba Group’s UC Browser and UC News as well as e-commerce service Club Factory, remain available for download on the marquee app stores, suggesting that Google and Apple have yet to comply with New Delhi’s direction.

TikTok, which has amassed over 200 million users in India, identifies Asia’s third-largest economy as its biggest overseas market. Nikhil Gandhi, who oversees TikTok’s operations in India, said the firm was “in the process” of complying with India’s order and was looking forward to engaging with lawmakers in the nation to assuage their concerns.

This is the first time that India, the world’s second largest internet market with nearly half of its 1.3 billion population online, has ordered a ban on so many foreign apps. New Delhi said the nation’s Computer Emergency Response Team had received many “representations from citizens regarding security of data and breach of privacy impacting upon public order issues. […] The compilation of these data, its mining and profiling by elements hostile to national security and defence of India.”

The surprising announcement created confusion as to how the Indian government was planning to go about “blocking” these services in India. Things are becoming clearer now.

TikTok, which was blocked in India for a week last year but was accessible to users who had already installed the app on their smartphones, said last year in a court filing that it was losing more than $500,000 a day. Reuters reported on Tuesday that ByteDance had planned to invest $1 billion in India to expand the reach of TikTok, a plan that now appears derailed.

Zhao Lijian, a spokesperson for Chinese Foreign Ministry, told reporters in a briefing on Tuesday that “Indian government has a responsibility to uphold the legal rights of international investors including those from China.”



from Social – TechCrunch https://ift.tt/3ifwxj0
via IFTTT

Facebook launches Avatars, its Bitmoji competitor, in India

Facebook Avatars, which lets users customize a virtual lookalike of themselves for use as stickers in chat and comments, is now available in India, the social juggernaut’s biggest market by user count.

The American firm said Tuesday it had launched Avatars in India as more social interaction moves online amid a nationwide lockdown in the world’s second largest internet market. The company said Avatars supports a variety of faces, hairstyles and outfits customized for users in India.

Avatars’ launch in India comes at the height of a backlash against Chinese apps in the country — some of which have posed serious competition to Facebook’s ever-growing tentacles in Asia’s third-largest economy. On Monday evening, New Delhi ordered a ban on TikTok and nearly 60 other apps developed by Chinese firms.

The social giant’s Avatars, a clone of Snapchat’s popular Bitmoji, was first unveiled last year. The feature, which Facebook sees as an expression tool, aims to make engagement on the social service fun, youthful, visually communicative and “more light-hearted.”

Users can create their avatar from the sticker tray in the comment section of a News Feed post or in Messenger. Facebook has expanded Avatars, initially available to users in Australia and New Zealand, to Europe and the U.S. in recent weeks.

Scores of companies including Chinese smartphone maker Xiaomi have attempted to replicate Bitmoji in recent years — though no one has expanded it like Snapchat.

Earlier this year, Snapchat introduced Bitmoji TV, a series of 4-minute comedy cartoons with users’ avatars. At the time, Snapchat said that about 70% of its daily active users, or 147 million of its 210 million users, had created their own Bitmojis.

Snapchat is preparing to launch Spectacles, its AR glasses, in India. The California-headquartered firm has so far struggled to gain ground in India, where it had about 30 million monthly active users last month, according to data from mobile insights firm App Annie that an industry executive shared with TechCrunch. Facebook has amassed over 350 million users in India, and its instant messaging service WhatsApp has more than 400 million users in the country.



from Social – TechCrunch https://ift.tt/31GICbh
via IFTTT

YouTube bans David Duke, Richard Spencer and other white nationalist accounts

YouTube just took action against a collection of controversial figures synonymous with race-based hate, kicking six major channels off its platform for violating its rules.

The company deleted six channels on Monday: Richard Spencer‘s own channel and the affiliated channel for the National Policy Institute/Radix Journal; far-right racist pseudo-science purveyor Stefan Molyneux; white supremacist outlet American Renaissance and its affiliated channel AmRenPodcasts; and the channel of white supremacist and former Ku Klux Klan leader David Duke.

“We have strict policies prohibiting hate speech on YouTube, and terminate any channel that repeatedly or egregiously violates those policies. After updating our guidelines to better address supremacist content, we saw a 5x spike in video removals and have terminated over 25,000 channels for violating our hate speech policies,” a YouTube spokesperson said in a statement provided to TechCrunch.

The company says that the channels it removed were repeat or “egregious” violators of the platform’s rules against leveraging YouTube videos to link to off-platform hate content and rules prohibiting users from making claims of inferiority about a protected group.

YouTube’s latest house-cleaning of far-right and white nationalist figures follows the suspension of Proud Boys founder Gavin McInnes earlier this month. Some of the newly-booted YouTube account owners turned to still-active Twitter accounts to complain about losing their YouTube channels Monday afternoon.

The same day that YouTube enforced its rules against a high-profile set of far-right accounts, both Twitch and Reddit took their own actions against content that violated their respective rules around hate. The Amazon-owned game streaming service suspended President Trump’s account Monday, citing comments made in two Trump rallies that aired there, one years-old and one from the campaign’s recent Tulsa rally. And after years of criticism for its failure to stem harassment and racism, Reddit announced that it would purge 2,000 subreddits, including r/The_Donald, the infamously hate-filled forum founded as Trump announced his candidacy.



from Social – TechCrunch https://ift.tt/38an9IG
via IFTTT

Trump suspended from Twitch, as Reddit bans ‘The_Donald’ and additional subreddits

Two big new pieces of news today from the ongoing battle between social media and politics. Both Twitch and Reddit have made moves against political content, citing violations of terms of service.

Twitch confirmed today that it has temporarily suspended the president’s account. “Hateful conduct is not allowed on Twitch,” a spokesperson for the streaming giant told TechCrunch. “In line with our policies, President Trump’s channel has been issued a temporary suspension from Twitch for comments made on stream, and the offending content has been removed.”

Twitch specifically cites two incidents, uttered by Trump at campaign rallies four years apart. The first comes from his campaign kickoff, including the now-infamous words:

When Mexico sends its people, they’re not sending their best. They’re not sending you. They’re not sending you. They’re sending people that have lots of problems, and they’re bringing those problems with us. They’re bringing drugs. They’re bringing crime. They’re rapists. And some, I assume, are good people. But I speak to border guards and they tell us what we’re getting. And it only makes common sense. It only makes common sense. They’re sending us not the right people.

The second is from the recent rally in Tulsa, Oklahoma, his first since COVID-19-related shutdowns ground much of presidential campaigning to a halt. Here’s the pertinent bit from that:

Hey, it’s 1:00 o’clock in the morning and a very tough, I’ve used the word on occasion, hombre, a very tough hombre is breaking into the window of a young woman whose husband is away as a traveling salesman or whatever he may do. And you call 911 and they say, “I’m sorry, this number’s no longer working.” By the way, you have many cases like that, many, many, many. Whether it’s a young woman, an old woman, a young man or an old man and you’re sleeping.

Twitch tells TechCrunch that it offered the following guidance to Trump’s team when the channel was launched, “Like anyone else, politicians on Twitch must adhere to our Terms of Service and Community Guidelines. We do not make exceptions for political or newsworthy content, and will take action on content reported to us that violates our rules.”

That news follows the recent ban of the massive The_Donald subreddit, which sported more than 790,000 users, largely devoted to sharing content about Trump. Reddit confirmed the update to its policy that resulted in the ban, along with 2,000 other subreddits, including one devoted to the hugely popular leftist comedy podcast, Chapo Trap House.

The company cites the following new rules:

  • Rule 1 explicitly states that communities and users that promote hate based on identity or vulnerability will be banned.
    • There is an expanded definition of what constitutes a violation of this rule, along with specific examples, in our Help Center article.
  • Rule 2 ties together our previous rules on prohibited behavior with an ask to abide by community rules and post with authentic, personal interest.
    • Debate and creativity are welcome, but spam and malicious attempts to interfere with other communities are not.
  • The other rules are the same in spirit but have been rewritten for clarity and inclusiveness.

It adds:

All communities on Reddit must abide by our content policy in good faith. We banned r/The_Donald because it has not done so, despite every opportunity. The community has consistently hosted and upvoted more rule-breaking content than average (Rule 1), antagonized us and other communities (Rules 2 and 8), and its mods have refused to meet our most basic expectations. Until now, we’ve worked in good faith to help them preserve the community as a space for its users—through warnings, mod changes, quarantining, and more.

Reddit adds that it banned the smaller Chapo board for “consistently host[ing] rule-breaking content and their mods have demonstrated no intention of reining in their community.”

Trump in particular has found himself waging war on social media sites. After Twitter played whack-a-mole with problematic tweets around mail-in voting and other issues, he signed an executive order taking aim at Section 230 of the Communications Decency Act, which protects sites from being sued for content posted by users.



from Social – TechCrunch https://ift.tt/2Bok6k9
via IFTTT

Daily Crunch: Facebook faces an advertiser revolt

Facebook takes (small) steps to improve its content policies as advertisers join a broad boycott, founder Alexis Ohanian is leaving Initialized Capital and Waze gets a new look.

Here’s your Daily Crunch for June 29, 2020.

1. As advertisers revolt, Facebook commits to flagging ‘newsworthy’ political speech that violates policy

In a live-streamed segment of the company’s weekly all-hands meeting, CEO Mark Zuckerberg announced new measures to fight voter suppression and misinformation. At the heart of the policy changes is an admission that the company will continue to allow politicians and public figures to disseminate hate speech that does, in fact, violate Facebook’s own guidelines — but it will add a label to denote they’re remaining on the platform because of their “newsworthy” nature.

This announcement comes as advertiser momentum against the social network’s content and monetization policies continues to grow, with Unilever and Verizon (which owns TechCrunch) both committing to pull advertising from Facebook.

2. Alexis Ohanian is leaving Initialized Capital

Ohanian is leaving Initialized Capital to work on “a new project that will support a generation of founders in tech and beyond,” the firm said in a statement to TechCrunch. According to Axios, Ohanian is leaving Initialized to work more closely on pre-seed efforts.

3. Waze gets a big visual update with a focus on driver emotions

The new look is much more colorful, and also foregrounds the ability for individual drivers to share their current emotions with Moods, a set of user-selectable icons (with an initial group of 30) that can reflect how you’re feeling as you’re driving.

4. Amazon warehouse workers strike in Germany over COVID-19 conditions

Amazon warehouse workers in Germany are striking for 48 hours this week, to protest conditions that have led to COVID-19 infections among fellow employees. Strikes began today at six warehouses and are set to continue through the end of day Tuesday.

5. Four views: How will the work visa ban affect tech and which changes will last?

Normally, the government would process tens of thousands of visa applications and renewals in October at the start of its fiscal year, but President Trump’s executive order all but guarantees new visas won’t be granted until 2021. Four TechCrunch staffers analyzed the president’s move in an attempt to see what it portends for the tech industry, the U.S. economy and our national image. (Extra Crunch membership required.)

6. Apple began work on the Watch’s handwashing feature years before COVID-19

Unlike other rush initiatives undertaken by the company once the virus hit, however, the forthcoming Apple Watch handwashing app wasn’t built overnight. The feature was the result of “years of work,” VP of Technology Kevin Lynch told TechCrunch.

7. This week’s TechCrunch podcasts

The latest full-length Equity episode discusses new funding rounds for Away and Sonder, while the Monday news roundup has the latest on the Rothenberg VC Scandal. And Original Content has a review of the second season of “The Politician” on Netflix.

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.



from Social – TechCrunch https://ift.tt/2CSCkuE
via IFTTT

Mobile developer Tru Luv enlists investors to help build a more inclusive alternative to gaming

Developer and programmer Brie Code has worked at the peak of the video game industry – she was responsible for many of the AI systems that powered non-player character (NPC) behavior in the extremely popular Assassin’s Creed series created by Ubisoft. It’s obvious that gaming isn’t for everyone, but Code became more and more interested in why that maxim seemed to play out along predictable gender lines, leading her ultimately to develop and launch #SelfCare through her own independent development studio TRU LUV.

#SelfCare went on to win accolades including a spot on Apple’s App Store Best of 2018 list, and Code and TRU LUV were also the first Canadian startup to attend Apple’s Entrepreneur Camp program. Now, with over 2 million downloads of #SelfCare (without any advertising at all), Code and TRU LUV have brought on a number of investors for their first outside funding, including Real Ventures, Evolve Ventures, Bridge Builders Collaborative and Artesian Venture Partners.

I spoke to Code about how she came up with and created #SelfCare, what’s next for TRU LUV, and how the current COVID-19 crisis actually emphasizes the need for an alternative to gaming that serves many similar functions, but for a previously underserved group of people for whom the challenges and reward structures of traditional gaming just don’t prove very satisfying.

“I became very, very interested in why video games don’t interest about half of people, including all of my friends,” Code told me. “And at that point, tablets were becoming popular, and everyone had a phone. So if there was something universal about this medium, it should be being more widely adopted, yet I was seeing really clear patterns that it wasn’t. The last time I checked, which was maybe a couple years ago, there were 5 billion mobile users and around 2.2 billion mobile gamers.”

Her curiosity piqued by the discrepancy, especially as an industry insider herself, Code began to do her own research to figure out potential causes of the divide – the reason why games only seemed to consistently appeal to about half of the general computer user population, at best.

“I started doing a lot of focus groups and research and I saw really clear patterns, and I knew that if there is a clear pattern, there must be an explanation,” Code said. “What I discovered after I read Sheri Graner Ray’s book Gender Inclusive Game Design, which she wrote in 2004, in a chapter on stimulation was how, and these are admittedly gross generalizations, but men tend to be stimulated by the sense of danger and things flashing on screen. And women, in her research, tended to be stimulated by something called a mutually-beneficial outcome to a socially significant situation. That’s when you help an NPC and they help you, for instance. In some way, that’s more significant, in the rules of the world, than just the score going up.”

TRU LUV founder and CEO Brie Code

Code then dug in further, using consumer research and further study, and found a potential cause behind this divide that then provided a way forward for developing a new alternative to a traditional gaming paradigm that might prove more appealing to the large group of people who weren’t served by what the industry has traditionally produced.

“I started to read about the psychology of stimulation, and from there I was reading about the psychology of defense, and I found a very simple and clear explanation for this divide, which is that there are two human stress responses,” she said. “One of them, which is much more commonly known, is called the ‘fight-or-flight’ response. When we experience the fight-or-flight response, in the face of challenge or pressure or danger, you have adrenaline released in your body, and that makes you instinctively want to win. So what a game designer does is create these situations of challenge, and then give you opportunities to win, and that leverages the fight-or-flight response to stress: That’s the gamification curve. But there is another human stress response discovered at the UCLA Social Cognitive Neuroscience lab in 2000 by Dr. Shelley Taylor and her colleagues. It’s very prevalent, probably about half of stress responses that humans experience, and it’s called tend-and-befriend.”

Instead of generating an adrenaline surge, it releases oxytocin in the brain, and instead of seeking a victory over a rival, people who experience this response want to take care of those who are more vulnerable, connect with friends and allies, and find mutually beneficial solutions to problems jointly faced. Seeking to generate that kind of response led to what Code and TRU LUV call AI companions, a gaming alternative that is non-zero-sum and based on the tend-and-befriend principle. Code’s background as an AI programmer working on some of the most sophisticated virtual character interactions available in modern games obviously came in handy here.

Code thought she might be on to something, but didn’t anticipate the level of #SelfCare’s success, which included 500,000 downloads in just six weeks, and more than 2 million today. And most of the feedback she received from users backed up her hypotheses about what the experience provided, and what users were looking for in an alternative to a mobile gaming experience.

Fast forward to now, and TRU LUV is growing its team and focused on iterating and developing new products to capitalize on the clear vein of interest they’ve tapped among that underserved half of mobile users. Code and her team have brought on investors whose views and portfolios align with their product vision and company ethos, including Evolve Ventures, which has backed a number of socially progressive ventures, and whose managing director Julius Mokrauer actually teaches a course on the subject at Columbia Business School.

#SelfCare was already showing a promising new path forward for mobile experience development before COVID-19 struck, but the product and TRU LUV are focused on “resilience and psychological development,” so it proved well-suited to a market in which mobile users were looking for ways to make sustained isolation more pleasant. Obviously we’re just at the beginning of feeling whatever impacts come out of the COVID-19 crisis, but it seems reasonable to expect that different kinds of mobile apps that trigger responses more aligned with personal well-being will be sought after.

Code says that COVID-19 hasn’t really changed TRU LUV’s vision or approach, but that it has led to the team moving more quickly on in-progress feature production, and on some parts of their roadmap, including building social features that allow players to connect with one another as well as with virtual companions.

“We want to move our production forward a bit faster than planned in order to respond to the need,” Code said. “Also we’re looking at being able to create social experiences a little bit earlier than planned, and also to attend to the need of people to be able to connect, above and beyond people who connect through video games.”



from Social – TechCrunch https://ift.tt/2BnEB0l
via IFTTT

Facebook expands its fan subscription program

Facebook is expanding the availability of the tools it offers to help game streamers and other online creators make money.

The social network first launched fan subscriptions in early 2018, giving a small group of creators in the United States and the United Kingdom the ability to charge their fans a $4.99 monthly fee for exclusive content and a fan badge for their profiles.

Participation in the subscription program was limited until today. In a blog post, Facebook now says that any creator in Australia, Brazil, Canada, Mexico, Thailand, United Kingdom and United States that meets the subscription eligibility criteria (having 10,000 followers or more than 250 return viewers, and either 50,000 post engagements or 180,000 watch minutes in the last 60 days, as well as abiding by Facebook’s general monetization policies) should be able to sign up to participate.
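The stated thresholds combine into a simple boolean rule; here is a minimal sketch of that logic (function and parameter names are hypothetical, not Facebook’s actual implementation):

```python
def meets_subscription_criteria(followers: int,
                                return_viewers: int,
                                post_engagements_60d: int,
                                watch_minutes_60d: int,
                                abides_by_policies: bool) -> bool:
    # Audience test: 10,000 followers OR more than 250 return viewers.
    audience_ok = followers >= 10_000 or return_viewers > 250
    # Activity test over the last 60 days: 50,000 post engagements
    # OR 180,000 watch minutes.
    activity_ok = (post_engagements_60d >= 50_000
                   or watch_minutes_60d >= 180_000)
    # Both tests must pass, plus Facebook's general monetization policies.
    return audience_ok and activity_ok and abides_by_policies
```

Under this reading, a creator with 12,000 followers and 60,000 recent post engagements would qualify, while one with 5,000 followers and 100 return viewers would not, regardless of activity.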

The company monetizes these subscriptions by taking up to 30% of subscription revenue. (It only collects revenue on subscribers acquired after January 1, 2020.)

Facebook subscriptions

Image Credits: Facebook

Facebook is also expanding the availability of Stars, a virtual currency that fans can use to tip their favorite creators. Creators in Australia, Canada, Colombia, India, Indonesia, Italy, Spain, Germany, France, Malaysia, Mexico, New Zealand, Peru, Philippines, Taiwan, Thailand, United Kingdom and the United States can now participate.

“We’re seeing the traditional notion of a creator evolve as comedians, artists, fitness instructors, athletes, small businesses and sports organizations use video and online events to connect with their audience,” wrote Product Marketing Director Yoav Arnstein and Head of Creator & Publisher Experience Jeff Birkeland. “To better support our partners, we’re improving the tools that help creators earn money and manage their presence on Facebook.”

Beyond subscriptions and virtual currencies, the company says it’s giving creators new ways to make money through advertising, including image and post-roll ads in short-form videos (60 to 180 seconds), as well as ads in live videos.

Lastly, Facebook says it’s improving the Creator Studio tool with features like Comment Insights (which show how comments on posts can affect engagement and audience size) and the ability to log in using Instagram credentials.



from Social – TechCrunch https://ift.tt/3g14H8e
via IFTTT

Newzoo forecasts 2020 global games industry will reach $159 billion

Games and esports analytics firm Newzoo released its highly cited annual report on the size and state of the video gaming industry yesterday. The firm is predicting 2020 global game industry revenue from consumers of $159.3 billion, a 9.3% increase year-over-year. Newzoo predicts the market will surpass $200 billion by the end of 2023.

Importantly, the data excludes in-game advertising revenue (which surged +59% during COVID-19 lockdowns, according to Unity) and the market of gaming digital assets traded between consumers. Advertising within games is a meaningful source of revenue for many mobile gaming companies. In-game ads in just the U.S. drove roughly $3 billion in industry revenue last year, according to eMarketer.

To compare with gaming, the global markets for other media and entertainment formats are:

Counting gamers

Of the 7.8 billion people on the planet, 4.2 billion (53.6%) have internet connectivity, and 2.69 billion will play video games this year; Newzoo predicts that number will reach three billion in 2023. It broke down the current geographic distribution of gamers as:

  • 1,447 million (54%) in Asia-Pacific
  • 386 million (14%) in Europe
  • 377 million (14%) in Middle East & Africa
  • 266 million (10%) in Latin America
  • 210 million (8%) in North America
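As a quick sanity check on the figures above (an illustrative script, not part of Newzoo’s report), the regional counts do sum to the 2.69 billion total, and the rounded shares match:

```python
# Newzoo's 2020 regional gamer counts, in millions.
regional_gamers_m = {
    "Asia-Pacific": 1447,
    "Europe": 386,
    "Middle East & Africa": 377,
    "Latin America": 266,
    "North America": 210,
}

total_m = sum(regional_gamers_m.values())  # 2,686M, i.e. ~2.69 billion
shares = {region: round(100 * count / total_m)
          for region, count in regional_gamers_m.items()}
```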


from Social – TechCrunch https://ift.tt/3870e0N
via IFTTT

As advertisers revolt, Facebook commits to flagging ‘newsworthy’ political speech that violates policy

As advertisers pull away from Facebook to protest the social networking giant’s hands-off approach to misinformation and hate speech, the company is instituting a number of stronger policies to woo them back.

In a livestreamed segment of the company’s weekly all-hands meeting, CEO Mark Zuckerberg recapped some of the steps Facebook is already taking, and announced new measures to fight voter suppression and misinformation — although they amount to things that other social media platforms like Twitter have already enacted and enforced in more aggressive ways.

At the heart of the policy changes is an admission that the company will continue to allow politicians and public figures to disseminate hate speech that does, in fact, violate Facebook’s own guidelines — but it will add a label to denote they’re remaining on the platform because of their “newsworthy” nature.

It’s a watered down version of the more muscular stance that Twitter has taken to limit the ability of its network to amplify hate speech or statements that incite violence.

Zuckerberg said:

A handful of times a year, we leave up content that would otherwise violate our policies if the public interest value outweighs the risk of harm. Often, seeing speech from politicians is in the public interest, and in the same way that news outlets will report what a politician says, we think people should generally be able to see it for themselves on our platforms.

We will soon start labeling some of the content we leave up because it is deemed newsworthy, so people can know when this is the case. We’ll allow people to share this content to condemn it, just like we do with other problematic content, because this is an important part of how we discuss what’s acceptable in our society — but we’ll add a prompt to tell people that the content they’re sharing may violate our policies.

The problems with this approach are legion. Ultimately, it’s another example of Facebook’s insistence that with hate speech and other types of rhetoric and propaganda, the onus of responsibility is on the user.

Zuckerberg did emphasize that threats of violence or voter suppression are not allowed to be distributed on the platform whether or not they’re deemed newsworthy, adding that “there are no exceptions for politicians in any of the policies I’m announcing here today.”

But it remains to be seen how Facebook will define the nature of those threats — and balance that against the “newsworthiness” of the statement.

The steps around election year violence supplement other efforts that the company has taken to combat the spread of misinformation around voting rights on the platform.


The new measures that Zuckerberg announced also include partnerships with local election authorities to determine the accuracy of information and what is potentially dangerous. Zuckerberg also said that Facebook would ban posts that make false claims (like saying ICE agents will be checking immigration papers at polling places) or threats of voter interference (like “My friends and I will be doing our own monitoring of the polls”).

Facebook is also going to take additional steps to restrict hate speech in advertising.

“Specifically, we’re expanding our ads policy to prohibit claims that people from a specific race, ethnicity, national origin, religious affiliation, caste, sexual orientation, gender identity or immigration status are a threat to the physical safety, health or survival of others,” Zuckerberg said. “We’re also expanding our policies to better protect immigrants, migrants, refugees and asylum seekers from ads suggesting these groups are inferior or expressing contempt, dismissal or disgust directed at them.”

Zuckerberg’s remarks came days after advertisers — most recently Unilever and Verizon — announced that they’re going to pull their money from Facebook as part of the #StopHateforProfit campaign organized by civil rights groups.

These are some small, good steps from the head of a social network that has been recalcitrant in the face of criticism from all corners (except, until now, from the advertisers that matter most to Facebook). But they don’t do anything about the teeming mass of misinformation that exists in the private channels that simmer below the surface of Facebook’s public-facing messages, memes and commentary.



from Social – TechCrunch https://ift.tt/3i7KbET
via IFTTT

Unilever and Verizon are the latest companies to pull their advertising from Facebook

Advertiser momentum against Facebook’s content and monetization policies continues to grow.

Last night, Verizon (which owns TechCrunch) said it will be pausing advertising on Facebook and Instagram “until Facebook can create an acceptable solution that makes us comfortable and is consistent with what we’ve done with YouTube and other partners.”

Then today, it was joined by consumer goods giant Unilever, which said it will halt all U.S. advertising on Facebook, Instagram (owned by Facebook) and even Twitter, at least until the end of the year.

“Based on the current polarization and the election that we are having in the U.S., there needs to be much more enforcement in the area of hate speech,” Unilever’s executive vice president of global media Luis Di Como told the Wall Street Journal.

The effort to bring advertiser pressure to bear on Facebook began with a campaign called #StopHateforProfit, which is coordinated by the Anti-Defamation League, the NAACP, Color of Change, Free Press and Sleeping Giants. The campaign is calling for changes that are supposed to improve support for victims of racism, antisemitism and hate, and to end ad monetization on misinformation and hateful content.

The list of companies who have agreed to pull their advertising from Facebook also includes outdoor brands like REI, The North Face and Patagonia.

Facebook provided the following statement in response to Unilever’s announcement:

We invest billions of dollars each year to keep our community safe and continuously work with outside experts to review and update our policies. We’ve opened ourselves up to a civil rights audit, and we have banned 250 white supremacist organizations from Facebook and Instagram. The investments we have made in AI mean that we find nearly 90% of Hate Speech we action before users report it to us, while a recent EU report found Facebook assessed more hate speech reports in 24 hours than Twitter and YouTube. We know we have more work to do, and we’ll continue to work with civil rights groups, GARM, and other experts to develop even more tools, technology and policies to continue this fight.

And Twitter provided a statement from Sarah Personette, vice president of global client solutions:

Our mission is to serve the public conversation and ensure Twitter is a place where people can make human connections, seek and receive authentic and credible information, and express themselves freely and safely. We have developed policies and platform capabilities designed to protect and serve the public conversation, and as always, are committed to amplifying voices from underrepresented communities and marginalized groups. We are respectful of our partners’ decisions and will continue to work and communicate closely with them during this time.

As of 1:57pm Eastern, Facebook stock was down more than 7% from the start of trading. CEO Mark Zuckerberg said he will also be addressing these issues at a town hall starting at 2pm Eastern today. (So … now.)




from Social – TechCrunch https://ift.tt/3exANbg
via IFTTT

Running a queer dating startup amid a pandemic and racial justice uprising

The events of the past few months have shaken the lives of everyone, but especially Black people in the U.S. COVID-19 has disproportionately impacted members of the Black community while police violence has recently claimed the lives of George Floyd, Tony McDade, Breonna Taylor, Rayshard Brooks and others. 

Two weeks ago, two Black transgender women, Riah Milton and Dominique “Rem’mie” Fells, were murdered. In light of their deaths, activists took to the streets to protest the violence Black trans women face. Two days after Floyd’s killing, McDade, a Black trans man, was shot and killed by police in Tallahassee, Florida.

In light of Pride month coinciding with one of the biggest racial justice movements of the century amid a pandemic, TechCrunch caught up with Robyn Exton, founder of queer dating app Her, to see how her company is navigating this unprecedented moment. 

Exton and I had a wide-ranging conversation including navigating COVID-19 as a dating startup, how sheltering in place has affected product development, shifting the focus of what is historically a month centered around LGBTQ people to include racial justice work and putting purpose back into Pride month.

“Pride exists because there is inequality within our world and within our community and still there is no clear focus on what it is we should be fighting for as a community,” Exton says. “It almost feels like since equal marriage was passed, there’s a range of topics but no clear voice saying this is what everyone should focus on right now. And then obviously everything changed after George Floyd’s murder. Over the course of the following weekend, we canceled pretty much everything that was going out that talked still about Pride as a celebration. Especially for Black people within our community, in that moment of so much trauma, it felt completely wrong to talk about Pride just in general.”

Worldwide, Pride events have been canceled as a result of the pandemic. But it gives people and corporations time to reflect on what kind of presence they want to have in next year’s Pride celebrations.



from Social – TechCrunch https://ift.tt/3eDeEsa
via IFTTT

Facebook will show users a pop-up warning before they share an outdated story

Facebook announced Thursday that it would introduce a notification screen warning users if they try to share content that’s more than 90 days old. They’ll be given the choice to “go back” or to click through if they’d still like to share the story knowing that it isn’t fresh.

Facebook acknowledged that old stories shared out of their original context play a role in spreading misinformation. The social media company said “news publishers in particular” have expressed concern about old stories being recirculated as though they’re breaking news.

“Over the past several months, our internal research found that the timeliness of an article is an important piece of context that helps people decide what to read, trust and share,” Facebook Vice President of Feed and Stories John Hegeman wrote on the company’s blog.

Image Credits: Facebook

The notification screen is an outgrowth of other kinds of notifications the company has experimented with recently. Last year, Instagram introduced a pop-up notification to discourage its users from sharing offensive or abusive comments with a similar set of options, allowing them to click through or go back. The company said that its initial results with the experiment showed promise in shaping users toward better behavior.

In a blog post announcing the new feature, Facebook said that it is now considering other kinds of notification screens to reduce misinformation, including pop-ups for posts about COVID-19 that would provide context about source links and steer users toward public health resources.



from Social – TechCrunch https://ift.tt/3803ZVX
via IFTTT

Privacy not a blocker for “meaningful” research access to platform data, says report

European lawmakers are eyeing binding transparency requirements for Internet platforms in a Digital Services Act (DSA) due to be drafted by the end of the year. But the question of how to create governance structures that provide regulators and researchers with meaningful access to data so platforms can be held accountable for the content they’re amplifying is a complex one.

Platforms’ own efforts to open up their data troves to outside eyes have been chequered to say the least. Back in 2018, Facebook announced the Social Science One initiative, saying it would provide a select group of academics with access to about a petabyte’s worth of sharing data and metadata. But it took almost two years before researchers got access to any data.

“This was the most frustrating thing I’ve been involved in, in my life,” one of the involved researchers told Protocol earlier this year, after spending some 20 months negotiating with Facebook over exactly what it would release.

Facebook’s political Ad Archive API has similarly frustrated researchers. “Facebook makes it impossible to get a complete picture of all of the ads running on their platform (which is exactly the opposite of what they claim to be doing),” said Mozilla last year, accusing the tech giant of transparency-washing.

Facebook, meanwhile, points to European data protection regulations and privacy requirements attached to its business following interventions by the U.S. FTC to justify painstaking progress around data access. But critics argue this is just a cynical shield against transparency and accountability. Plus, of course, none of these regulations stopped Facebook grabbing people’s data in the first place.

In January, Europe’s lead data protection regulator penned a preliminary opinion on data protection and research which warned against such shielding.

“Data protection obligations should not be misappropriated as a means for powerful players to escape transparency and accountability,” wrote EDPS Wojciech Wiewiorówski. “Researchers operating within ethical governance frameworks should therefore be able to access necessary API and other data, with a valid legal basis and subject to the principle of proportionality and appropriate safeguards.”

Nor is Facebook the sole offender here, of course. Google brands itself a ‘privacy champion’ on account of how tight a grip it keeps on access to user data, heavily mediating data it releases in areas where it claims ‘transparency’. While, for years, Twitter routinely disparaged third party studies which sought to understand how content flows across its platform — saying its API didn’t provide full access to all platform data and metadata so the research couldn’t show the full picture. Another convenient shield to eschew accountability.

More recently the company has made some encouraging noises to researchers, updating its dev policy to clarify rules, and offering up a COVID-related dataset — though the included tweets remain self-selected. So Twitter’s mediating hand remains on the research tiller.

A new report by AlgorithmWatch seeks to grapple with the knotty problem of platforms evading accountability by mediating data access — suggesting some concrete steps to deliver transparency and bolster research, including by taking inspiration from how access to medical data is mediated, among other discussed governance structures.

The goal: “Meaningful” research access to platform data. (Or as the report title puts it: Operationalizing Research Access in Platform Governance: What to Learn from Other Industries?)

“We have strict transparency rules to enable accountability and the public good in so many other sectors (food, transportation, consumer goods, finance, etc). We definitely need it for online platforms — especially in COVID-19 times, where we’re even more dependent on them for work, education, social interaction, news and media consumption,” co-author Jef Ausloos tells TechCrunch.

The report, which the authors are aiming at European Commission lawmakers as they ponder how to shape an effective platform governance framework, proposes mandatory data sharing frameworks with an independent EU-institution acting as an intermediary between disclosing corporations and data recipients.

It’s not the first time an online regulator has been mooted, of course — but the entity being suggested here is more tightly configured in terms of purpose than some of the other Internet overseers being proposed in Europe.

“Such an institution would maintain relevant access infrastructures including virtual secure operating environments, public databases, websites and forums. It would also play an important role in verifying and pre-processing corporate data in order to ensure it is suitable for disclosure,” they write in a report summary.

Discussing the approach further, Ausloos argues it’s important to move away from “binary thinking” to break the current ‘data access’ trust deadlock. “Rather than this binary thinking of disclosure vs opaqueness/obfuscation, we need a more nuanced and layered approach with varying degrees of data access/transparency,” he says. “Such a layered approach can hinge on types of actors requesting data, and their purposes.”

A market research purpose might only get access to very high level data, he suggests. Whereas medical research by academic institutions could be given more granular access — subject, of course, to strict requirements (such as a research plan, ethical board review approval and so on).

“An independent institution intermediating might be vital in order to facilitate this and generate the necessary trust. We think it is vital that that regulator’s mandate is detached from specific policy agendas,” says Ausloos. “It should be focused on being a transparency/disclosure facilitator — creating the necessary technical and legal environment for data exchange. This can then be used by media/competition/data protection/etc authorities for their potential enforcement actions.”

Ausloos says many discussions on setting up an independent regulator for online platforms have proposed too many mandates or competencies — making it impossible to achieve political consensus. Whereas a leaner entity with a narrow transparency/disclosure remit should be able to cut through noisy objections, is the theory.

The infamous example of Cambridge Analytica does certainly loom large over the ‘data for research’ space — aka, the disgraced data company which paid a Cambridge University academic to use an app to harvest and process Facebook user data for political ad targeting. And Facebook has thought nothing of turning this massive platform data misuse scandal into a stick to beat back regulatory proposals aiming to crack open its data troves.

But Cambridge Analytica was a direct consequence of a lack of transparency, accountability and platform oversight. It was also, of course, a massive ethical failure — given that consent for political targeting was not sought from people whose data was acquired. So it doesn’t seem a good argument against regulating access to platform data. On the contrary.

With such ‘blunt instrument’ tech talking points being lobbied into the governance debate by self-interested platform giants, the AlgorithmWatch report brings both welcome nuance and solid suggestions on how to create effective governance structures for modern data giants.

On the layered access point, the report suggests the most granular access to platform data would be the most highly controlled, along the lines of a medical data model. “Granular access can also only be enabled within a closed virtual environment, controlled by an independent body — as is currently done by Findata [Finland’s medical data institution],” notes Ausloos.

Another governance structure discussed in the report — as a case study from which to draw learnings on how to incentivize transparency and thereby enable accountability — is the European Pollutant Release and Transfer Register (E-PRTR). This regulates pollutant emissions reporting across the EU, and results in emissions data being freely available to the public via a dedicated web-platform and as a standalone dataset.

"Credibility is achieved by assuring that the reported data is authentic, transparent, reliable and comparable, because of consistent reporting. Operators are advised to use the best available reporting techniques to achieve these standards of completeness, consistency and credibility," the report says of the E-PRTR.

"Through this form of transparency, the E-PRTR aims to impose accountability on operators of industrial facilities in Europe towards the public, NGOs, scientists, politicians, governments and supervisory authorities."

While EU lawmakers have signalled an intent to place legally binding transparency requirements on platforms, at least in some less contentious areas (such as illegal hate speech) as a means of obtaining accountability on specific content problems, they have simultaneously set out a sweeping plan to fire up Europe's digital economy by boosting the reuse of (non-personal) data.

Leveraging industrial data to support R&D and innovation is a key plank of the Commission’s tech-fuelled policy priorities for the next five+ years, as part of an ambitious digital transformation agenda.

This suggests that any regional move to open up platform data is likely to go beyond accountability — given EU lawmakers are pushing for the broader goal of creating a foundational digital support structure to enable research through data reuse. So if privacy-respecting data sharing frameworks can be baked in, a platform governance structure that’s designed to enable regulated data exchange almost by default starts to look very possible within the European context.

"Enabling accountability is important, which we tackle in the pollution case study; but enabling research is at least as important," argues Ausloos, who does postdoctoral research at the University of Amsterdam's Institute for Information Law. "Especially considering these platforms constitute the infrastructure of modern society, we need data disclosure to understand society."

“When we think about what transparency measures should look like for the DSA we don’t need to reinvent the wheel,” adds Mackenzie Nelson, project lead for AlgorithmWatch’s Governing Platforms Project, in a statement. “The report provides concrete recommendations for how the Commission can design frameworks that safeguard user privacy while still enabling critical research access to dominant platforms’ data.”

You can read the full report here.
