Mark Zuckerberg announced Trump’s suspension on January 7, after the then-president of the United States incited his followers to riot at the nation’s Capitol, an event that resulted in a number of deaths and imperiled the peaceful transition of power.
The board says the point of the public comment process is to incorporate “diverse perspectives” from third parties who wish to share research that might inform their decisions, though it seems a lot more likely the board will wind up with a tidal wave of subjective and probably not particularly useful political takes. Nonetheless, the comment process will be open for 10 days and comments will be collected in an appendix for each case. The board will issue a decision on Trump’s Facebook fate within 90 days of January 21, though the verdict could come sooner.
The Oversight Board specifically invites public comments that consider:
Whether Facebook’s decision to suspend President Trump’s accounts for an indefinite period complied with the company’s responsibilities to respect freedom of expression and human rights, if alternative measures should have been taken, and what measures should be taken for these accounts going forward.
How Facebook should assess off-Facebook context in enforcing its Community Standards, particularly where Facebook seeks to determine whether content may incite violence.
How Facebook should treat the expression of political candidates, office holders, and former office holders, considering their varying positions of power, the importance of political opposition, and the public’s right to information.
The accessibility of Facebook’s rules for account-level enforcement (e.g. disabling accounts or account functions) and appeals against that enforcement.
Considerations for the consistent global enforcement of Facebook’s content policies against political leaders, whether at the content-level (e.g. content removal) or account-level (e.g. disabling account functions), including the relevance of Facebook’s “newsworthiness” exemption and Facebook’s human rights responsibilities.
The Oversight Board’s post gets very granular on the Trump suspension, critiquing Facebook for a lack of specificity: the company didn’t state exactly which part of its community standards was violated. Between this and the five recent cases, the board appears to view its role as a technical one, in which it examines each case against Facebook’s existing ruleset and then makes recommendations for future policy rather than working backward from its own broader recommendations.
The Facebook Oversight Board announced its first cluster of decisions this week, overturning the company’s own choice to remove potentially objectionable content in four of five cases. None of those cases pertained to content relevant to Trump’s account suspension, but they prove that the Oversight Board isn’t afraid to go against the company’s own thinking — at least when it comes to what gets taken down.
French startup Chefclub announced earlier this week that it has raised a $17 million funding round led by First Bridge Ventures. SEB Alliance, the venture arm of kitchen appliance maker Groupe SEB, Korelya Capital and Algaé Ventures are also participating.
Chefclub has been building a major media brand on social platforms, attracting an audience that holds its own against well-funded media brands like Tastemade and Tasty.
I have already covered the company at length, so I encourage you to read my previous profile:
Chefclub is an interesting lesson in sales funnels. It has a huge top of funnel, with 100 million followers across YouTube, Snapchat, Instagram and TikTok. Overall, its videos generate over 1 billion views per month.
The company leverages that audience to create new products. It started with cookbooks, obviously. Chefclub has sold 700,000 books so far, and because those books are self-published, the company keeps a good chunk of the revenue.
More recently, the startup has launched cooking kits for kids with colorful measuring cups, cooking accessories and easy-to-understand recipes. 150,000 people have bought a product for children.
Chefclub now wants to get its brand into stores through partnerships, which is why having Groupe SEB as an investor makes sense. You can imagine co-branded items boosted by promotion on Chefclub’s accounts.
Finally, the startup plans to enter a new market: consumer packaged goods. The thinking is the same, except this time the product is food itself. It’s interesting to see that Chefclub doesn’t view online ads as the future of the company, and that seems like a smart decision during the current economic crisis.
An early Snapchat employee who once architected the “Our Stories” product, Chloë Drimal, has now launched her own social app, Yoni Circle. Described as a membership-based community, the app aims to connect womxn using storytelling — including through both live video chat sessions as well as with pre-recorded stories that are available at any time.
The company has been quietly operating in beta since April 2020, but is now making its public launch.
Drimal came up with the idea for a social storytelling app, in part, because she saw the potential when working on the Snapchat “Our Stories” product.
Image Credits: Yoni Circle; founder Chloë Drimal
“I got to see that storytelling connects us,” she explains. “I got to peer into global experiences like New Year’s Eve or witnessing the Hajj pilgrimage to Mecca, and I just saw firsthand how connected we are as people,” Drimal continues. “I got to see how that was affecting our Snapchat users and making them feel more connected to the world because of this art of storytelling,” she adds.
But another inspiration came from Drimal’s personal experience of being taken off the “Our Stories” product to work on other projects at Snap, a difficult time in her career that started to make her feel very alone. She later ended up having conversations with other women, often older women who shared their own experiences, who helped her realize that she wasn’t as alone as she first thought.
“Their stories empowered me to write my next chapter, and know that this wasn’t the end of my career as I dramatically thought as a twenty-five or twenty-four year-old. It really was just the beginning and it helped me see the healing of storytelling — but also the importance of what strangers being vulnerable can do,” she says.
After leaving Snap, where she had later run women’s initiatives, Drimal began hosting an in-person community focused around more structured storytelling circles. The community evolved to become what’s now the Yoni Circle app, whose beta version was built with help from former Snap engineer Akiva Bamberger, now a Yoni Circle advisor.
Image Credits: Yoni Circle
Today, the app has two main features: the interactive Storytelling Circles component and the more passive Yoni Radio.
The former allows members to join 60-minute moderated live video chat sessions with up to six womxn who connect with one another by listening to each other’s stories. During the Circle, a trained “Salonniere” guide first leads the group through introductions and a breathing exercise, then introduces a storytelling prompt based on a specific theme, like “Stories on Gratitude” or “Stories on Surprise,” for example.
The Salonnieres are not volunteers, but rather paid contractors who have undergone specific training to lead these sorts of sessions. Over time, they’ll also be able to gather members for paid web-based events, which could be things like yoga classes, book clubs, cooking classes and more.
Image Credits: Yoni Circle
The Circle sessions have a basic rule: take the stories with you, and leave the names behind. In other words, what’s shared in circles is meant to remain confidential, unless the member chooses to share it publicly. Anyone violating that rule will be banned.
Members are also advised to speak simply, leave their egos at the door, and respect differences. No one receives the topic beforehand, either, so members can’t rehearse their speeches and put on a “performance.” The act of participating is meant to be about authenticity and vulnerability.
During the session, each participant takes their turn to share their own story and will listen to the others’ in return. Users only speak when they have the “talking piece,” and they can react to another story with snaps, or by clicking a snap icon.
While the sessions may uplift members the way that group therapy does, they’re not really focused on addressing psychological issues. Instead, Drimal says members compare them to “a slumber party combined with a mindfulness class.”
Still, she says, members feel like participating is an act of self-care.
“You just feel lighter,” Drimal explains. “It’s hard not to listen to other stories, to see yourself and just be reminded that you aren’t alone in the highs and lows of life.”
Image Credits: Yoni Circle
Members can also opt to record their own stories and then set them as either public or private on their Yoni Circle profile. The team then curates the public stories to share as highlights on the app’s homepage, allowing users to listen at any time. This also powers the Yoni Radio feature.
Recently, the company had been testing a weekly broadcast of these recorded stories, but will soon trial a new “story of the day” feature instead.
The Yoni Circle app first launched into beta last April, just as the COVID-19 pandemic in the U.S. had begun. That led to people isolating themselves at home away from friends, extended family, and other social interactions — driving demand for new social experiences.
But Yoni Circle doesn’t quite fit into the new live, interactive mobile market that has developed of late, led by apps like Clubhouse and Twitter Spaces.
“I like to think we’ve carved out something different,” says Drimal. “It is intimate because we’re creating a safe space to be vulnerable…the things that I share in any circle I would never share on Clubhouse,” she says. “I think that’s also why we’ve been so focused on the way we grow our community. Yes, we’re looking to have millions of members, but we need to get there carefully.”
Currently, Yoni Circle is open to people who identify as womxn, and it involves an application process where you have to share who you are and what you’re looking to gain from the experience. Longer-term, the goal is to evolve the platform into a safe space that’s open to all.
Though the pandemic helped generate initial interest in the app — it now has members from 1,000 cities across 80 countries — the startup sees a future in the post-pandemic market with in-person events that further connect its members.
Yoni Circle today is available on iOS for free. It will later monetize through an Audible-like credits model which provides access to the Circle sessions.
Three of the popular retail stock-trading apps that have hosted much of the activity around the Wall Street Bets subreddit-spurred run on stocks including GameStop (GME) and AMC have lifted all restrictions on trading in those stocks. M1, Webull and Public had restricted transactions for the affected stocks earlier in the day, along with Robinhood.
M1, Webull and Public all attributed the restrictions placed on these volatile stocks not to any effort to curb their purchase or sale, but instead cited the costs associated with settling the trades on the part of their clearing firm, Apex. All three platforms employ Apex to clear trades made by users via their platform.
In an interview with Webull CEO Anthony Denier, Yahoo Finance confirmed that the restriction was not something the company had any hand in deciding.
NEW: The CEO of Webull tells us the decision to join Robinhood in restricting AMC and GameStop trades came from soaring costs to settle its users’ trades:
"It wasn’t our choice … this has to do with settlement mechanics in the market." pic.twitter.com/Micz5U6SRc
Public confirmed via Twitter that users can now buy and sell $GME and $AMC and $KOSS on the platform, thanks to the resolution of the Apex blocker. Meanwhile Webull noted that all three stocks are now also available for exchange via their app, as did M1 shortly after.
We're back.
Our clearing firm, Apex, has resumed the ability to buy $GME, $AMC, and $KOSS on Public. We appreciate their cooperation and are grateful to our members for their patience and understanding.
Robinhood earlier issued a blog post about its decision to restrict a number of stocks tied to the r/WallStreetBets campaign against short-selling hedge funds, arguing that it was acting in the best interest of users. Judging by the reaction on social media so far, most users have not much appreciated that argument. Notably, Robinhood at no point references any technical barriers imposed by a clearing house.
Twitter only announced its acquisition of newsletter platform Revue two days ago, but the company has already begun to integrate the product into the Twitter.com website. It appears “Newsletters” will soon be the newest addition to Twitter’s sidebar navigation, alongside Bookmarks, Moments, Twitter Ads, and other options. The company is also readying a way to promote the new product to Twitter users, promising them another way to reach their audience while getting paid for their work.
These findings and others were uncovered by noted reverse engineer Jane Manchun Wong, who dug into the Twitter.com website to see what the company may have in store for its newest acquisition.
According to a pop-up promotional message in development she found, Twitter will soon be pitching a handful of Revue benefits, like the ability to compose and schedule newsletters, embed tweets, import email lists, analyze engagement and earn money from paid followers. The messaging was clearly in early testing (it even had a typo!), but it hints at Twitter’s larger plans to tie Revue into the Twitter platform and serve as a way for prominent users to essentially monetize their reach.
Currently, the “Find Out More” button on the pop-up message redirects Twitter users to the Revue website, Wong found.
In addition, Wong noted Twitter was making “Newsletters” a new navigation option on the Twitter sidebar menu. Unfortunately, it was not shown on the top-level menu where you today find options like Explore, Notifications, Messages or Bookmarks, but rather on the sub-menu you access from the three-dot “More” link.
Twitter is working to include the “Newsletters” item in the menu in the web app, which shows the popup about @revue above pic.twitter.com/ATaXDGr0zc
The tight integration between Revue and Twitter’s main platform could potentially give the company an interesting competitive advantage in the newsletters market — especially as Twitter has already dropped hints that its new audio product, Twitter Spaces, will also be used as a way to connect with newsletter subscribers.
In its announcement, Twitter referred to “new settings for writers to host conversations” with their readers. That likely means Twitter users would be able to not just publish newsletters with the new Twitter product, but also monetize their existing follower base, find new readers through Twitter’s built-in features, and then engage their fans on an ongoing basis through audio chats in Spaces. Combine that with the lowered 5% fee on paid newsletters, and many authors are rightly weighing Twitter’s potential advantages. If anything is holding them back, it’s Twitter’s less-than-stellar reputation for successfully capitalizing on some of its acquisitions.
Twitter declined to comment on Wong’s findings, but we understand these features are currently not live on the website. Wong told us she hasn’t found any indications of Revue integrations in the Twitter mobile apps just yet.
Facebook’s self-regulatory ‘Oversight Board’ (FOB) has delivered its first batch of rulings on contested content moderation decisions, almost two months after picking its first cases.
A long time in the making, the FOB is part of Facebook’s crisis PR push to distance its business from the impact of controversial content moderation decisions — by creating a review body to handle a tiny fraction of the complaints its content takedowns attract. It started accepting submissions for review in October 2020 — and has faced criticism for being slow to get off the ground.
Announcing the first decisions today, the FOB reveals it has chosen to uphold just one of the content moderation decisions made earlier by Facebook, overturning four of the tech giant’s decisions.
Decisions on the cases were made by five-member panels that contained at least one member from the region in question and a mix of genders, per the FOB. A majority of the full Board then had to review each panel’s findings to approve the decision before it could be issued.
The sole case where the Board has upheld Facebook’s decision to remove content is case 2020-003-FB-UA — where Facebook had removed a post under its Community Standard on Hate Speech which had used the Russian word “тазики” (“taziks”) to describe Azerbaijanis, who the user claimed have no history compared to Armenians.
In the four other cases the Board has overturned Facebook takedowns, rejecting earlier assessments made by the tech giant in relation to policies on hate speech, adult nudity, dangerous individuals/organizations, and violence and incitement. (You can read the outline of these cases on its website.)
Each decision relates to a specific piece of content but the board has also issued nine policy recommendations.
These include suggestions that Facebook [emphasis ours]:
Create a new Community Standard on health misinformation, consolidating and clarifying the existing rules in one place. This should define key terms such as “misinformation.”
Adopt less intrusive means of enforcing its health misinformation policies where the content does not reach Facebook’s threshold of imminent physical harm.
Increase transparency around how it moderates health misinformation, including publishing a transparency report on how the Community Standards have been enforced during the COVID-19 pandemic. This recommendation draws upon the public comments the Board received.
Ensure that users are always notified of the reasons for any enforcement of the Community Standards against them, including the specific rule Facebook is enforcing. (The Board made two identical policy recommendations on this front related to the cases it considered, also noting in relation to the second hate speech case that “Facebook’s lack of transparency left its decision open to the mistaken belief that the company removed the content because the user expressed a view it disagreed with”.)
Explain and provide examples of the application of key terms from the Dangerous Individuals and Organizations policy, including the meanings of “praise,” “support” and “representation.” The Community Standard should also better advise users on how to make their intent clear when discussing dangerous individuals or organizations.
Provide a public list of the organizations and individuals designated as ‘dangerous’ under the Dangerous Individuals and Organizations Community Standard or, at the very least, a list of examples.
Inform users when automated enforcement is used to moderate their content, ensure that users can appeal automated decisions to a human being in certain cases, and improve automated detection of images with text-overlay so that posts raising awareness of breast cancer symptoms are not wrongly flagged for review. Facebook should also improve its transparency reporting on its use of automated enforcement.
Revise Instagram’s Community Guidelines to specify that female nipples can be shown to raise breast cancer awareness and clarify that where there are inconsistencies between Instagram’s Community Guidelines and Facebook’s Community Standards, the latter take precedence.
Where it has overturned Facebook takedowns the board says it expects Facebook to restore the specific pieces of removed content within seven days.
In addition, the Board writes that Facebook will also “examine whether identical content with parallel context associated with the Board’s decisions should remain on its platform”. And says Facebook has 30 days to publicly respond to its policy recommendations.
So it will certainly be interesting to see how the tech giant responds to the laundry list of proposed policy tweaks — perhaps especially the recommendations for increased transparency (including the suggestion it inform users when content has been removed solely by its AIs) — and whether Facebook is happy to align entirely with the policy guidance issued by the self-regulatory vehicle (or not).
Facebook created the board’s structure and charter and appointed its members — but has encouraged the notion it’s ‘independent’ from Facebook, even though it also funds FOB (indirectly, via a foundation it set up to administer the body).
And while the Board claims its review decisions are binding on Facebook there is no such requirement for Facebook to follow its policy recommendations.
It’s also notable that the FOB’s review efforts are entirely focused on takedowns — rather than on things Facebook chooses to host on its platform.
Given all that it’s impossible to quantify how much influence Facebook exerts on the Facebook Oversight Board’s decisions. And even if Facebook swallows all the aforementioned policy recommendations — or more likely puts out a PR line welcoming the FOB’s ‘thoughtful’ contributions to a ‘complex area’ and says it will ‘take them into account as it moves forward’ — it’s doing so from a place where it has retained maximum control of content review by defining, shaping and funding the ‘oversight’ involved.
tl;dr: An actual supreme court this is not.
In the coming weeks, the FOB will likely be most closely watched over a case it accepted recently, related to Facebook’s indefinite suspension of former US president Donald Trump after he incited a violent assault on the US Capitol earlier this month.
The board notes that it will be opening public comment on that case “shortly”.
“Recent events in the United States and around the world have highlighted the enormous impact that content decisions taken by internet services have on human rights and free expression,” it writes, going on to add that: “The challenges and limitations of the existing approaches to moderating content draw attention to the value of independent oversight of the most consequential decisions by companies such as Facebook.”
But of course this ‘Oversight Board’ is unable to be entirely independent of its founder, Facebook.
WhatsApp, the popular messaging app with more than 2 billion users, has been getting a lot of heat and losing users in recent weeks after announcing (and then delaying) changes to how it shares data with its owner Facebook. And it’s not done with how it’s tweaking privacy and security. Now, it’s adding a new biometric feature to the service to bring in a new authentication layer for those using its web and desktop versions.
The company said that, from today, it will let people require a fingerprint, face, or iris scan to use WhatsApp on desktop or web.
The feature is coming as part of a new look for the desktop versions, ahead of what the company hints will be more updates coming soon.
With the new feature, you will have the option (not a requirement) of adding a biometric login, using a fingerprint, face ID, or iris ID depending on the device, on Android or iPhone handsets as a second layer of authentication.
When enabled, the check will appear before a desktop or web version can be linked to a mobile app account, which today relies on scanning a QR code. The QR code doesn’t go away; the biometric check is a second step users will need to take, similar to how you can opt in to two-step verification for the WhatsApp mobile app today.
WhatsApp says that on iPhone, it will work with all devices operating iOS 14 and above with Touch ID or Face ID, while on Android, it will work on any device compatible with Biometric Authentication (Face Unlock, Fingerprint Unlock or Iris Unlock).
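For a sense of what plugging into those platform APIs looks like, here is a minimal Android sketch using the standard androidx.biometric library the article alludes to. The flow and function names are hypothetical, not WhatsApp’s actual code; the point is that the app only learns whether authentication succeeded, while the biometric data itself stays with the operating system:

```kotlin
import androidx.biometric.BiometricManager
import androidx.biometric.BiometricPrompt
import androidx.core.content.ContextCompat
import androidx.fragment.app.FragmentActivity

// Hypothetical gate: require a successful biometric check before starting
// the QR-code device-link flow described above.
fun confirmBiometricsThenLink(activity: FragmentActivity, startQrLinking: () -> Unit) {
    val canUseBiometrics = BiometricManager.from(activity)
        .canAuthenticate(BiometricManager.Authenticators.BIOMETRIC_WEAK)
    if (canUseBiometrics != BiometricManager.BIOMETRIC_SUCCESS) {
        startQrLinking() // no usable biometrics enrolled; QR step alone
        return
    }

    val prompt = BiometricPrompt(
        activity,
        ContextCompat.getMainExecutor(activity),
        object : BiometricPrompt.AuthenticationCallback() {
            override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
                startQrLinking() // only now show the QR scanner
            }
        }
    )

    prompt.authenticate(
        BiometricPrompt.PromptInfo.Builder()
            .setTitle("Confirm it's you")
            .setSubtitle("Authenticate to link a new device")
            .setNegativeButtonText("Cancel")
            .build()
    )
}
```

Because the prompt is handled by the OS, the app never sees the fingerprint or face data itself, which is the same property banking apps rely on.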
The service is another step forward in WhatsApp creating more feature parity between its flagship mobile apps, and how you interact with the service when you use it elsewhere.
Mobile still accounts for the majority of WhatsApp’s users, but events like the global health pandemic, which is keeping more of us inside, are likely driving a surge in use of its web and native desktop apps, so it makes sense for WhatsApp to be adding more features there.
WhatsApp told TechCrunch that it is going to be adding in more features this year to bring the functionality of the two closer together. There are still big gaps: for example, you can’t make calls on the WhatsApp web version. (That feature may be one coming soon: as of last month, it started to get spotted in beta tests.)
To be extra clear, the biometric service, which is being turned on globally, will be opt-in. Users will need to go to their settings to turn on the feature, in the same way that today they need to go into their settings to turn on biometric authentication for their mobile apps.
What comes next for biometrics?
WhatsApp’s recent announcements about data-sharing changes between it and Facebook have put a lot of people on edge about the company’s intentions. And that’s no surprise. It’s a particularly sensitive issue, since messaging has been thought of as a very personal and sometimes private space, seen as separate from what people do on more open social networking platforms.
Over the years, however, that view has been eroded through data leaks, group messaging abuse, and (yes) changes in privacy terms.
That means there will likely be a lot of people who will doubt what Facebook’s intentions are here, too.
WhatsApp is pretty clear in outlining that it’s not able to access the biometric information that you will be storing in your device, and that it is using the same standard biometric authentication APIs that other secure apps, like banking apps, use.
But the banking app parallel is notable here, and maybe one worth thinking about more. Consider how the company has been adding a lot more features and functionality into WhatsApp, including the ability to pay for goods and services, and in markets like India, tests to offer insurance and pension products.
Yes, this new biometric feature is being rolled out today to create a more secure way for people to link up apps across devices. But in the interest of that feature parity, in future, it will be interesting to see how and if biometrics might appear as those other features get rolled out beyond mobile, too.
TikTok has a vaping problem. Although a 2019 U.S. law made it illegal to sell or market e-cigarettes to anyone under the age of 21, TikTok videos featuring top brands of disposable e-cigarettes and vapes for sale have been relatively easy to find on the app. These videos, set to popular and upbeat music, clearly target a teenage customer base with offers of now-unauthorized cartridge flavors like fruit and mint in the form of a disposable vape. Some sellers even promote their “discreet” packaging services, where the vapes they ship to customers can be hidden from parents’ prying eyes by being placed under the package’s stuffing or tucked inside other products, like makeup bags or fuzzy slippers.
In February 2020, the FDA first began to take enforcement action against illegally marketed e-cigarette devices, including those offering flavors besides tobacco or menthol, as well as those targeted towards minors — an action that was designed to target Juul.
As a result, disposable vapes like Puff Bar were adopted by some young people who were still in search of flavors like bubblegum, peach, strawberry and others. These cheaper disposables were easy to find, and continued to be available at convenience stores and gas stations.
But they’re also all over TikTok, ready to be shipped to anyone with a way to pay.
What’s more, when this content is reported to TikTok, it’s not always taken down.
TechCrunch found vape sellers marketing on TikTok who have been using the app to communicate with customers through both videos and comments. They also direct viewers to what appear to be illegally operating websites. Their TikTok videos often show off the seller’s current inventory of vapes, including disposables like Puff Bar in teen-friendly flavors.
Essentially, the sellers are using TikTok as a way to create vape advertisements they don’t have to pay for that are capable of reaching young consumers — an audience whose interest in vaping hasn’t necessarily declined because of the FDA’s action.
According to the latest study from nonprofit tobacco control organization Truth Initiative, use of Juul decreased between 2019 and 2020, but it remains the most popular e-cigarette brand among 10th and 12th graders who currently vape, at 41%. The report also found that disposable products such as Puff Bar (8%) and Smok (13.1%) gained ground during this time.
“Taken together, the 2020 National Youth Tobacco Survey (NYTS) and the new e-cigarette sales data report illustrate how the current federal policy enabled youth to quickly migrate to menthol e-cigarettes (especially Juul menthol pods) when mint-flavored products were removed from the marketplace, and for inexpensive, flavored disposable e-cigarettes such as Puff Bar to soar in popularity,” Truth stated in September 2020.
“With kid magnet names like cotton candy and banana ice, the market share of disposable products nearly doubled in just 10 months from August 2019 to May 2020,” it said.
The scale of the problem on TikTok is also significant.
Today, U.S. teens account for an estimated 32.5% of TikTok’s U.S. active users, according to third-party estimates published by Statista. The company has around 100 million monthly active users in the U.S., it said last year.
Meanwhile, videos tagged with popular vape and e-cigarette brands and keywords have racked up hundreds of millions of views.
For example, the hashtag for leading vape brand Juul (#juul) has 623.9 million views on TikTok, as of the time of writing.
Puff Bar, the maker of a single-use vaping product with Chinese origins, has 449.8 million views for the hashtag #puffbar. Other brands have some traction as well: #NJOY has 55.3 million views, #smok has 40.1 million views, and British American Tobacco’s #Vuse has 5 million views.
These are just the views associated with the hashtag itself. For every search, there are multiple variations. For instance, #puffbars, #puffbarplus and #puffbardealer have 66.8 million views, 9.6 million views and 8.9 million views, respectively. Tags like #juulgang (590.4 million views) have become popular enough that anti-vaping content creators have adopted them as a means of counter-programming against vaping content.
These trends are particularly concerning given the large, young demographic that uses TikTok. A third of its U.S. users may be 14 or under, in fact.
In the U.S. App Store, TikTok is rated for ages 12 and up and on Google Play, its content rating is “Teen.” But while TikTok has modified the default privacy settings for young people’s accounts and has been quick to block other controversial hashtags in the past (like those around U.S. election conspiracies), it has allowed vaping-related content to remain easy to find.
In addition to the popular vaping hashtags prevalent on TikTok, we uncovered numerous vape sellers operating under obvious account names such as “@puffsonthelow,” “@PuffUniverse” and “@Puffbarcafe,” for example. Their pages were filled with vape videos boldly marketing their current selections, hashtagged with vape-related terms like #puffbarchallenge, #puffplus, #vapetricks and others.
In some cases, we found vape sellers had even tagged their videos with #kids and other trending tags.
Knowing that their target market is often teenage vapers, many videos depicted how the seller could package the vape inside another product or hide it in the stuffing so parents wouldn’t find out. We saw videos of vapes packaged underneath candy, inside makeup bags, inside socks, underneath other larger products, and more.
Through links published on a seller’s profile or referenced in the videos, TikTok users are redirected to the sellers’ websites or even Discord channels, where they would only sometimes be presented with an age verification pop-up.
Often, they could just add items to a basket and check out. Many sellers also directed their customers to pay using PayPal, Venmo and/or Cash App, instead of accepting standard credit card payments.
None of this is legal, according to the Campaign for Tobacco Free Kids, a leading American nonprofit focused on reducing tobacco consumption, particularly among youth.
“It’s illegal to market these products or to engage in marketing that appeals directly to anybody under the age of 21,” Matt Myers, the president of the Campaign for Tobacco Free Kids, told TechCrunch. “And it’s illegal to actually conduct a sales transaction without age verification.”
Image Credits: TikTok screenshot
Plus, he adds, clicking a box on a website that says “I’m over 21,” does not qualify as a legal age verification for making these sales.
The FDA hasn’t issued specific guidance around online retail, but the law is clear that checking IDs is required to ensure retailers aren’t selling to underage users. That’s not happening with a pop-up box, and often there’s no box at all.
In addition, the FDA reminded TechCrunch that Congress recently established new limits on the mailing and delivery of e-cigarettes and other tobacco products through the United States Postal Service and through other carriers, which should limit access to these sorts of products through online retail purchases.
Myers, however, points out that the current FDA guidelines have made enforcement of this sort of “social” vape marketing more difficult than necessary.
“The images you’re seeing, the use of influencers, and the kinds of offers you’re seeing are governed by a federal standard by the FDA, which is very broad and very general,” Myers says. “The FDA’s failure to articulate clear, specific guidelines means that everyone is in a constant what I call ‘whack-a-mole.'”
Enforcement, then, often depends on the FDA stepping in, which Myers says happens “on a very sporadic basis.”
“In many respects, the behaviors, the actions and the things you’re seeing do violate the law. But the mechanisms for implementing it that were put in place under this past administration are woefully weak and inadequate,” he says.
Image Credits: screenshots of TikTok
Another complicating factor is that public health groups — like the Campaign for Tobacco Free Kids, for instance — don’t have a relationship with TikTok, as they do with other social networks.
Over the last couple of years, over 100 public health groups came together to ask leading social networks like Facebook, Instagram, Twitter and Snapchat to clamp down on tobacco-related content and the use of influencers in marketing. As a result of these efforts, Facebook and Instagram implemented new rules to prohibit social media influencers from promoting tobacco-related products and developed algorithms to pick up on that sort of content.
Overall, the health organizations have reported seeing a reduction in tobacco and vape content on top social platforms, but these efforts have not yet included TikTok.
The Campaign for Tobacco Free Kids has not given TikTok a comprehensive review, Myers admits, due to the app still being relatively new. But from what the organization has seen so far, TikTok is of growing concern.
“We’ve seen some of the most egregious marketing, use of influencers, direct offers of sale to young people [which] appear to be gravitating over to TikTok,” Myers says. “And we don’t see any evidence that TikTok has actually done anything.”
TikTok can’t claim ignorance of the problem, either.
Image Credits: TikTok screenshot
When a vape seller who unabashedly advertised “no ID check” was reported to TikTok through its built-in reporting mechanism, TikTok’s content moderation team said the content didn’t violate its guidelines. This same response was given when other vape sellers were reported, as well. (See below.)
TikTok claims this shouldn’t be happening. The company told us that it will remove accounts dedicated to posting vaping or e-cigarette content as soon as it becomes aware of them, and will reset account bios that link to off-platform tobacco or vaping sites.
It also says its Community Guidelines prohibit content that suggests, depicts, imitates, or promotes the possession or consumption of tobacco by a minor, and content that offers instruction targeting minors on how to buy, sell, or trade tobacco. And it doesn’t permit tobacco ads.
Image Credits: screenshots of TikTok reports
Reached for comment over whether it was aware of the problems on TikTok, an FDA spokesperson said it does not discuss specific compliance and enforcement activities.
However, the spokesperson said the agency will closely monitor retailer, manufacturer, importer, and distributor compliance with federal tobacco laws and regulations and take corrective action when violations occur. In addition, the FDA said it conducts routine monitoring and surveillance of tobacco labeling, advertising and other promotional activities, including activities on the internet.
What’s been making matters more confusing is that the FDA has been accepting premarket applications for flavored vape devices, but has so far refused to list which companies — Puff Bar or otherwise — may have filed for these. That means health organizations don’t know which products the FDA has under review.
But the agency told TechCrunch that, regardless of whether a premarket application has been submitted, it is enforcing against the lack of marketing authorization for any product whose manufacturer “is not taking adequate measures to prevent youth access to these products.”
That statement would then include these online Puff Bar retailers and their TikTok marketing efforts.
The FDA added that it has taken action against Puff Bar, specifically, in recent days.
It sent a warning letter to Cool Clouds Distribution, Inc. d/b/a Puff Bar, last July, notifying the company that it was marketing new tobacco products that lacked marketing authorization and that such products, as a result, were adulterated and misbranded.
Earlier this month, as part of an ongoing joint operation with the FDA, U.S. Customs and Border Protection seized 33,681 units of e-cigarettes, which included disposable flavored e-cigarette cartridges resembling the Puff Bar brand, including Puff XXL and Puff Flow, we’re told.
TikTok confirmed the activity we’re documenting is in violation of its guidelines and policies, but could not explain why there’s been such a disconnect between that policy and its enforcement actions.
“We are committed to the safety and well-being of our TikTok community, and we strictly prohibit content that depicts or promotes the possession or consumption of tobacco and drugs by minors,” a TikTok spokesperson told TechCrunch. “We will remove accounts that are identified as being dedicated to promoting vaping, and we do not allow ads for vaping products.”
Remember the app audit Facebook founder Mark Zuckerberg promised to carry out a little under three years ago at the height of the Cambridge Analytica scandal? Actually the tech giant is very keen that you don’t.
The UK’s information commissioner just told a parliamentary subcommittee on online harms and disinformation that a secret arrangement between her office and Facebook prevents her from publicly answering whether or not Facebook contacted the ICO about completing a much-trumpeted ‘app audit’.
“I think I could answer that question with you and the committee in private,” information commissioner Elizabeth Denham told questioner, Kevin Brennan, MP.
Pressed on responding, then and there, on the question of whether Facebook ever notified the regulator about completing the app audit — with Brennan pointing out “after all it was a commitment Mark Zuckerberg gave in the public domain before a US Senate committee” — Denham referred directly to a private arrangement with Facebook which she suggested prevented her from discussing such details in public.
“It’s part of an agreement that we struck with Facebook,” she told the committee. “In terms of our litigation against Facebook. So there is an agreement that’s not in the public domain and that’s why I would prefer to discuss this in private.”
The UK Information Commissioner has previously passed information to FB re: Cambridge Analytica, and has a secret legal settlement w/ them
This will be a worry to anyone who has passed information to them (e.g. whistleblowers)
In October 2019 Facebook settled with the UK’s data protection watchdog — agreeing to pay in full a £500,000 penalty announced by the ICO in 2018 in relation to the Cambridge Analytica breach but which Facebook had been appealing.
When it settled with the ICO, Facebook did not admit liability. It had earlier secured a win from a first-tier legal tribunal, which held in June that “procedural fairness and allegations of bias” against the regulator should be considered as part of its appeal. The ICO’s litigation against Facebook had thus got off to a bad start, likely providing the impetus for the regulator to settle with Facebook’s private army of in-house lawyers.
In a statement at the time, covering the bare bones of the settlement, the ICO said Denham considered the agreement “best serves the interests of all UK data subjects who are Facebook users”.
There was no mention of any ‘gagging clauses’ in that disclosure. But the regulator did note that the terms of the agreement gave Facebook permission to “retain documents disclosed by the ICO during the appeal for other purposes, including furthering its own investigation into issues around Cambridge Analytica”.
So — at a stroke — Facebook gained control of a whole lot of strategically important information.
The settlement looks to have been extremely convenient for Facebook. Not only was it fantastically cheap (Facebook paid $5BN to settle with the FTC in the wake of the Cambridge Analytica scandal just a short while later); and not only did it provide Facebook with a trove of ICO-obtained data to do its own digging into Cambridge Analytica safely out of the public eye; but it also ensured the UK regulator would be restricted in what it could say publicly.
To the point where the information commissioner has refused to say anything about Facebook’s post-Cambridge Analytica app audit in public.
The ICO seized a massive trove of data from the disgraced (and since defunct) company, which had become such a thorn in Facebook’s side, after raiding Cambridge Analytica’s UK offices in early 2018. How much of that data ended up with Facebook via the ICO settlement is unclear.
Interestingly, the ICO also never produced a final report on its Cambridge Analytica investigation.
Instead it sent a letter to the DCMS committee last year — in which it set out a number of conclusions, confirming its view that the umbrella of companies of which CA was a part had been aggregating datasets from commercial sources to try to “make predictions on personal data for political alliance purposes”, as it put it; also confirming the improperly obtained Facebook data had been incorporated into a pre-existing database containing “voter file, demographic and consumer data for US individuals”.
The ICO also said then that its investigation did not find evidence of the Facebook data that had been sold to Cambridge Analytica had been used for political campaigning associated with the UK’s Brexit Referendum. But there was no overarching report detailing the underlying workings via which the regulator got to its conclusions.
So, again from Facebook’s perspective, a pretty convenient outcome.
Asked today by the DCMS committee why the regulator had not produced the expected final report on Cambridge Analytica, Denham pointed to a number of other reports it put out over the course of the multi-year probe, such as audits of UK political parties and an investigation into credit reporting agencies.
“The letter was extensive,” she also argued. “My office produced three reports on the investigation into the misuse of data in political campaigning. So we had a policy report and we had two enforcement reports. So we had looked at the entire ecosystem of data sharing and campaigning… and the strands of that investigation are reported out sufficiently, in my view, in all of our work.”
“Taken together the letter, which was our final line on the report, with the policy and the enforcement actions, prosecutions, fines, stop processing orders, we had done a lot of work in this space — and what’s important here is that we have really pulled back the curtain on the use of data in democracy which has been taken up by… many organizations and parliamentarians around the world,” she added.
Denham also confirmed to the committee that the ICO has retained data related to the Cambridge Analytica investigation, data which could be of potential use to other investigations still ongoing around the world. But she denied that her office had been asked by the US Senate Intelligence Committee to provide it with information obtained from Cambridge Analytica, seemingly contradicting an earlier report by the US committee suggesting it had been unable to obtain the information it sought. (We’ve contacted the committee to ask about this.)
Denham did say evidence obtained from Cambridge Analytica was shared with the FTC, SEC and with states attorneys general, though.
We’ve also reached out to Facebook about its private arrangement with the ICO, and to ask again about the status of its post-Cambridge Analytica ‘app audit’. (And will update this report with any response.)
The company has produced periodic updates about the audit’s progress, saying in May 2018 that around 200 apps had been suspended as a result of the internal probe, for example.
Then in August 2019 Facebook also claimed to the DCMS committee that the app audit was “ongoing”.
In its original audit pledge, in March 2018, Zuckerberg promised a root-and-branch investigation into any other ‘sketchy’ apps operating on Facebook’s platform, responding in a crisis-length Facebook post to the revelations that a third party had illicitly obtained data on millions of users with the aim of building psychographic profiles for voter targeting. It later turned out that an app developer, operating freely on Facebook’s platform under existing developer policies, had sold user data to Cambridge Analytica.
“We will investigate all apps that had access to large amounts of information before we changed our platform to dramatically reduce data access in 2014, and we will conduct a full audit of any app with suspicious activity,” Zuckerberg wrote at the time. “We will ban any developer from our platform that does not agree to a thorough audit. And if we find developers that misused personally identifiable information, we will ban them and tell everyone affected by those apps. That includes people whose data [Aleksandr] Kogan misused here as well.”
It’s notable that the Facebook founder did not promise to transparently and publicly report audit findings. This is of course what ‘self regulation’ looks like. Invisible final ‘audit’ reports.
An ‘audit’ that’s entirely controlled by an entity deeply implicated in core elements of what’s being scrutinized obviously isn’t worth the paper it’s (not) written on. But, in Facebook’s case, this opened-but-never-closed ‘app audit’ appears to have served its crisis PR purpose.
Twitter is announcing that it has acquired Revue, a Dutch startup that allows users to publish and monetize email newsletters. While Revue hasn’t driven the same wave of “is this the future of media?” think pieces as Substack, it counts major publishers like Vox Media and The Markup as customers.
Newsletters aren’t the most obvious fit for Twitter’s platform, but in a blog post, Product Lead Kayvon Beykpour and VP of Publisher Products Mike Park suggested that this is a new way for Twitter to serve writers and publishers who have built a following with their tweets.
“Our goal is to make it easy for them to connect with their subscribers, while also helping readers better discover writers and their content,” Beykpour and Park wrote. “We’re imagining a lot of ways to do this, from allowing people to sign up for newsletters from their favorite follows on Twitter, to new settings for writers to host conversations with their subscribers. It will all work seamlessly within Twitter.”
They also suggested that this will give writers additional ways to make money. Revue already supports paid subscriptions, and Beykpour and Park said that the company will continue developing new monetization features, “whether it’s helping broaden revenue streams or serving as a cornerstone of someone’s business.”
They added that Twitter will continue to operate Revue as a standalone product, with its team remaining “focused on improving the ways writers create their newsletters, build their audience and get paid for their work.” The company is also making the platform’s pro features free for all users and lowering the fee charged on paid newsletters to 5%.
The financial terms of the acquisition were not disclosed. According to Crunchbase, Revue had raised €400,000 from various angel investors.
If social networks and other platforms are to get a handle on disinformation, it’s not enough to know what it is — you have to know how people react to it. Researchers at MIT and Cornell have some surprising but subtle findings that may affect how Twitter and Facebook should go about treating this problematic content.
MIT’s contribution is a counterintuitive one. When someone encounters a misleading headline in their timeline, the logical thing to do would be to put a warning before it so that the reader knows it’s disputed from the start. Turns out that’s not quite the case.
The study of nearly 3,000 people had them evaluating the accuracy of headlines after receiving different (or no) warnings about them.
“Going into the project, I had anticipated it would work best to give the correction beforehand, so that people already knew to disbelieve the false claim when they came into contact with it. To my surprise, we actually found the opposite,” said study co-author David Rand in an MIT news article. “Debunking the claim after they were exposed to it was the most effective.”
When a person was warned beforehand that the headline was misleading, they improved in their classification accuracy by 5.7 percent. When the warning came simultaneously with the headline, that improvement grew to 8.6 percent. But if shown the warning afterwards, they were 25 percent better. In other words, debunking beat “prebunking” by a fair margin.
The team speculated as to the cause of this, suggesting that it fits with other indications that people are more likely to incorporate feedback into a preexisting judgment rather than alter that judgment as it’s being formed. They warned that the problem is far deeper than a tweak like this can fix.
“There is no single magic bullet that can cure the problem of misinformation,” said co-author Adam Berinsky. “Studying basic questions in a systematic way is a critical step toward a portfolio of effective solutions.”
The study from Cornell is equal parts reassuring and frustrating. People viewing potentially misleading information were reliably influenced by the opinions of large groups — whether or not those groups were politically aligned with the reader.
It’s reassuring because it suggests that people are willing to trust that if 80 out of 100 people thought a story was a little fishy, even if 70 of those 80 were from the other party, there might just be something to it. It’s frustrating because of how seemingly easy it is to sway an opinion simply by saying that a large group thinks it’s one way or the other.
“In a practical way, we’re showing that people’s minds can be changed through social influence independent of politics,” said graduate student Maurice Jakesch, lead author of the paper. “This opens doors to use social influence in a way that may de-polarize online spaces and bring people together.”
Partisanship still played a role, it must be said — people were about 21 percent less likely to have their view swayed if the group opinion was led by people belonging to the other party. But even so people were very likely to be affected by the group’s judgment.
Part of why misinformation is so prevalent is that we don’t really understand why it’s so appealing to people, or what measures reduce that appeal, among other simple questions. As long as social media companies are blundering around in the dark, they’re unlikely to stumble upon a solution, but every study like this makes a little more light.
Twitter pilots a new tool to fight disinformation, Apple brings celebrity-guided walks to the Apple Watch and Clubhouse raises funding. This is your Daily Crunch for January 25, 2021.
With Birdwatch, users will be able to flag tweets that they find misleading, write notes to add context to those tweets and rate the notes written by others. This is supposed to be a complement to the existing system where Twitter removes or labels particularly problematic tweets, rather than a replacement.
What remains to be seen: How Twitter will handle it when two or more people get locked into a battle and post a flurry of conflicting notes about whether a tweet is misleading or not.
The tech giants
Walking with Dolly — Apple discusses how and why it brought Time to Walk to the Watch.
Taboola is going public via SPAC — The transaction is expected to close in the second quarter, and the combined company will trade on the New York Stock Exchange under the ticker symbol TBLA.
The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.
The goal, as explained in a blog post by Twitter’s Vice President of Product Keith Coleman, is to expand beyond the labels that the company already applies to controversial or potentially misleading tweets, which he suggested are limited to “circumstances where something breaks our rules or receives widespread public attention.”
Coleman wrote that the Birdwatch approach will “broaden the range of voices that are part of tackling this problem.” That brings a broader range of perspectives to these issues and goes beyond the simple question of, “Is this tweet true or not?” It may also take some of the heat off Twitter for individual content moderation decisions.
Users can sign up on the Birdwatch site to flag tweets that they find misleading, add context via notes and rate the notes written by other contributors based on whether they’re helpful or not. These notes will only be visible on the Birdwatch site for now, but it sounds like the company’s goal is to incorporate them into the main Twitter experience.
“We believe this approach has the potential to respond quickly when misleading information spreads, adding context that people trust and find valuable,” Coleman said. “Eventually we aim to make notes visible directly on Tweets for the global Twitter audience, when there is consensus from a broad and diverse set of contributors.”
Given the potential for plenty of argument and back-and-forth on contentious tweets, it remains to be seen how Twitter will present these notes in a way that isn’t confusing or overwhelming, or how it can avoid weighing in on some of these arguments. The company said Birdwatch will rank content using algorithmic “reputation and consensus systems,” with the code shared publicly. (All notes contributed to Birdwatch will also be available for download.) You can read more about the initial ranking system here.
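As a rough illustration of what a consensus-style ranking can look like, here is a hypothetical sketch that orders notes by a smoothed share of “helpful” ratings and ignores notes with too few ratings. It is a toy model under those assumptions, not the algorithm Twitter published:

```kotlin
// Toy consensus ranking: score each note by its share of "helpful" ratings,
// smoothed with a small prior so one or two votes can't dominate, and drop
// notes below a minimum rating count. Not Twitter's actual Birdwatch code.
data class Note(val id: String, val helpful: Int, val notHelpful: Int) {
    val total: Int get() = helpful + notHelpful
    // Laplace smoothing: a note with a single helpful vote scores 0.67,
    // not a perfect 1.0.
    val score: Double get() = (helpful + 1.0) / (total + 2.0)
}

fun rankNotes(notes: List<Note>, minRatings: Int = 5): List<Note> =
    notes.filter { it.total >= minRatings }
        .sortedByDescending { it.score }

fun main() {
    val notes = listOf(
        Note("broad-consensus", helpful = 90, notHelpful = 10),
        Note("barely-rated", helpful = 3, notHelpful = 0), // filtered out
        Note("contested", helpful = 40, notHelpful = 35),
    )
    rankNotes(notes).forEach { println("${it.id}: %.2f".format(it.score)) }
}
```

A production system would presumably layer rater reputation on top of this raw consensus, which is what the “reputation and consensus systems” phrasing hints at.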
“We know there are a number of challenges toward building a community-driven system like this — from making it resistant to manipulation attempts to ensuring it isn’t dominated by a simple majority or biased based on its distribution of contributors,” Coleman said. “We’ll be focused on these things throughout the pilot.”