Happy Birthday Cindy Moon!


Spiders will forever be a dominant force in the Marvel universe. Spider’s come and go, but every one of them holds a place in our hearts. There’s the original Spider-Man Peter Parker and […]

The post Happy Birthday Cindy Moon! appeared first on Geek.com.



from Geek.com https://ift.tt/2jguduK
via IFTTT

New Nintendo Patent Could Take Trading Cards to the Next Level


Trading card games are just as popular as ever, but they’re decidedly low-tech in comparison to video games. There have been attempts with the Pokémon Trading Card Game and Magic the Gathering to […]

The post New Nintendo Patent Could Take Trading Cards to the Next Level appeared first on Geek.com.



from Geek.com https://ift.tt/2HHp2Pc
via IFTTT

Years of Dog Breeding Has Transformed Man’s Best Friend


With gene-editing technology CRISPR on the rise, it’s only a matter of time before scientists begin breeding humans. So what would a world of test-tube babies actually look like? Consider man’s best friend. […]

The post Years of Dog Breeding Has Transformed Man’s Best Friend appeared first on Geek.com.



from Geek.com https://ift.tt/2I1irT7
via IFTTT

6 Comic Characters I’d Love to See in the TV Shows or Movies


There is a TV show or movie for a million different comic characters right now. And there are still more to come with Cloak and Dagger slated for the summer. We are living […]

The post 6 Comic Characters I’d Love to See in the TV Shows or Movies appeared first on Geek.com.



from Geek.com https://ift.tt/2FtJfGp
via IFTTT

Hands-On: Nintendo Labo is Fun Cardboard STEM


We’ve been messing with the cardboard contraptions of Nintendo Labo for over a week now. And honestly, our opinions haven’t changed that much since our first big hands-on with the Nintendo Switch DIY […]

The post Hands-On: Nintendo Labo is Fun Cardboard STEM appeared first on Geek.com.



from Geek.com https://ift.tt/2nCAvGK
via IFTTT

Construction Robot Can Lay 1000 Bricks An Hour


Those in the business of automating things will tell you that there are three main kinds of tasks that are perfect for robots. If it’s dull, dirty, or dangerous job… let a robot […]

The post Construction Robot Can Lay 1000 Bricks An Hour appeared first on Geek.com.



from Geek.com https://ift.tt/2KrLNIM
via IFTTT

AI-Generated Nudes Are More Scary Than Scintillating

artificial intelligence

A West Virginian teenager taught artificial intelligence to generate nude portraits. And this is why we can’t have nice things. By feeding thousands of nude portraits into a generative adversarial network (GAN), Robbie […]

The post AI-Generated Nudes Are More Scary Than Scintillating appeared first on Geek.com.



from Geek.com https://ift.tt/2r9SgQf
via IFTTT

Oscar Mayer Trades Bacoin Cryptocurrency For Actual Bacon


Oscar Mayer wants you to put your money where your mouth is with Bacoin—the first bacon-based cryptocurrency. Go ahead, register online to invest in and mine Bacoin (pronounced BAY-COIN), redeemable for packs of […]

The post Oscar Mayer Trades Bacoin Cryptocurrency For Actual Bacon appeared first on Geek.com.



from Geek.com https://ift.tt/2HFLDzq
via IFTTT

Facebook is trying to block Schrems II privacy referral to EU top court

Facebook’s lawyers are attempting to block a High Court decision in Ireland, where its international business is headquartered, to refer a long-running legal challenge to the bloc’s top court.

The social media giant’s lawyers asked the court to stay the referral to the CJEU today, Reuters reports. Facebook is trying to appeal the referral by challenging Irish case law — and wants a stay granted in the meantime.

The case relates to a complaint filed by privacy campaigner and lawyer Max Schrems regarding a transfer mechanism that’s currently used by thousands of companies to authorize flows of personal data on EU citizens to the US for processing. Though Schrems was actually challenging the use of so-called Standard Contractual Clauses (SCCs) by Facebook, specifically, when he updated an earlier complaint on the same core data transfer issue — which relates to US government mass surveillance practices, as revealed by the 2013 Snowden disclosures — with Ireland’s data watchdog.

However the Irish Data Protection Commissioner decided to refer the issue to the High Court to consider the legality of SCCs as a whole. And earlier this month the High Court decided to refer a series of questions relating to EU-US data transfers to Europe’s top court — seeking a preliminary ruling on fundamental questions that could even unseat another data transfer mechanism, called the EU-US Privacy Shield, depending on what CJEU judges decide.

An earlier legal challenge by Schrems — which was also related to the clash between US mass surveillance programs (which harvest data from social media services) and EU fundamental rights (which mandate that web users’ privacy is protected) — resulted in the previous arrangement for transatlantic data flows being struck down by the CJEU in 2015, after standing for around 15 years.

Hence the current case is being referred to by privacy watchers as ‘Schrems II’. You can also see why Facebook is keen to delay another CJEU referral if it can.

According to comments made by Schrems on Twitter the Irish High Court reserved judgement on Facebook’s request today, with a decision expected within a week…

Facebook’s appeal is based on trying to argue against Irish case law — which Schrems says does not allow for an appeal against such a referral — hence he’s couching it as another delaying tactic by the company.

We reached out to Facebook for comment on the case. At the time of writing it had not responded.

In a statement from October, after an earlier High Court decision on the case, Facebook said:

Standard Contract Clauses provide critical safeguards to ensure that Europeans’ data is protected once transferred to companies that operate in the US or elsewhere around the globe, and are used by thousands of companies to do business. They are essential to companies of all sizes, and upholding them is critical to ensuring the economy can continue to grow without disruption.

This ruling will have no immediate impact on the people or businesses who use our services. However it is essential that the CJEU now considers the extensive evidence demonstrating the robust protections in place under Standard Contractual Clauses and US law, before it makes any decision that may endanger the transfer of data across the Atlantic and around the globe.



from Social – TechCrunch https://ift.tt/2r8Wqag
via IFTTT

GEEK PICK OF THE DAY: Star Wars R2D2 French Press Coffee Maker


I like coffee. I’m drinking some right now! While nothing can beat a smooth espresso drink pulled from a commercial-grade Nuova or La Povoni, not everyone has thousands of dollars and several square […]

The post GEEK PICK OF THE DAY: Star Wars R2D2 French Press Coffee Maker appeared first on Geek.com.



from Geek.com https://ift.tt/2w58yz8
via IFTTT

Westworld Goes Back in Time and Dolores Remembers Everything


As if we didn’t have enough timelines to worry about this season, last night’s episode introduced yet another. It starts with a cold open, which is new for Westworld. For a second, it looks […]

The post Westworld Goes Back in Time and Dolores Remembers Everything appeared first on Geek.com.



from Geek.com https://ift.tt/2JBirq3
via IFTTT

Star Wars Resistance is the Anime Sequel to Star Wars Rebels I Didn’t Know I Wanted

Star Wars: The Role-Playing Game

The Star Wars and anime fandoms overlap considerably, so it’s no surprise that Disney is tapping into that with the upcoming follow-up to Star Wars Rebels. Star Wars Resistance will be an anime, […]

The post Star Wars Resistance is the Anime Sequel to Star Wars Rebels I Didn’t Know I Wanted appeared first on Geek.com.



from Geek.com https://ift.tt/2I18e9k
via IFTTT

Twitter also sold data access to Cambridge Analytica researcher

Since it was revealed that Cambridge Analytica improperly accessed the personal data of millions of Facebook users, one question has lingered in the minds of the public: What other data did Dr. Aleksandr Kogan gain access to?

Twitter confirmed to The Telegraph on Saturday that GSR, Kogan’s own commercial enterprise, had purchased one-time API access to a random sample of public tweets from a five-month period between December 2014 and April 2015. Twitter told Bloomberg that, following an internal review, the company did not find any access to private data about people who use Twitter.

Twitter sells API access to large organizations or enterprises for the purposes of surveying sentiment or opinion during various events, or around certain topics or ideas.
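As a toy illustration of the kind of aggregate analysis that sort of API access enables, the sketch below tallies crude sentiment over a random sample of public posts. Nothing here reflects Twitter's actual API or GSR's methods; the word lists, sample posts, and tallying logic are invented for the example, and real sentiment tooling is far more sophisticated.

```python
import random

# Tiny illustrative sentiment lexicons (assumptions, not a real model).
POSITIVE = {"great", "love", "excellent"}
NEGATIVE = {"awful", "hate", "terrible"}

def sentiment(text: str) -> int:
    """Naive score: positive-word hits minus negative-word hits."""
    words = set(text.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

# Stand-in for a random sample of public posts from an API.
sample = [
    "I love this product, it is great",
    "awful service, I hate waiting",
    "shipping was fine",
]

tally = {"pos": 0, "neg": 0, "neutral": 0}
for post in random.sample(sample, len(sample)):
    s = sentiment(post)
    key = "pos" if s > 0 else "neg" if s < 0 else "neutral"
    tally[key] += 1

print(tally)  # {'pos': 1, 'neg': 1, 'neutral': 1}
```

The point of surveying at this aggregate level is that no individual account data is needed — only counts over public content, which matches Twitter's description of the access sold.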

Here’s what a Twitter spokesperson said to The Telegraph:

Twitter has also made the policy decision to off-board advertising from all accounts owned and operated by Cambridge Analytica. This decision is based on our determination that Cambridge Analytica operates using a business model that inherently conflicts with acceptable Twitter Ads business practices. Cambridge Analytica may remain an organic user on our platform, in accordance with the Twitter Rules.

Obviously, this doesn’t have the same scope as the data harvested about users on Facebook. Twitter’s data on users is far less personal. Location on the platform is opt-in and generic at that, and users are not forced to use their real name on the platform.

Still, it shows just how broad the Cambridge Analytica data collection was ahead of the 2016 election.

We reached out to Twitter and will update when we hear back.



from Social – TechCrunch https://ift.tt/2HE9dJ2
via IFTTT

Europe eyeing bot IDs, ad transparency and blockchain to fight fakes

European Union lawmakers want online platforms to come up with their own systems to identify bot accounts.

This is part of a voluntary Code of Practice the European Commission now wants platforms to develop and apply by this summer, within a wider package of proposals generally aimed at tackling the problematic spread and impact of disinformation online.

The proposals follow an EC-commissioned report last month, by its High-Level Expert Group, which recommended more transparency from online platforms to help combat the spread of false information online — and also called for urgent investment in media and information literacy education, and strategies to empower journalists and foster a diverse and sustainable news media ecosystem.

Bots, fake accounts, political ads, filter bubbles

In an announcement on Friday the Commission said it wants platforms to establish “clear marking systems and rules for bots” in order to ensure “their activities cannot be confused with human interactions”. It does not go into any greater detail on how that might be achieved; clearly it intends for platforms to come up with the relevant methodologies themselves.

Identifying bots is not an exact science — as academics conducting research into how information spreads online could tell you. The current tools that exist for trying to spot bots typically involve rating accounts across a range of criteria to give a score of how likely an account is to be algorithmically controlled vs human controlled. But platforms do at least have a perfect view into their own systems, whereas academics have had to rely on the variable level of access platforms are willing to give them.
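A back-of-the-envelope sketch of that criteria-based approach might look like the following. The features, thresholds, and weights are purely illustrative assumptions for the sake of the example, not any real bot-detection tool's method:

```python
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float
    account_age_days: int
    followers: int
    following: int
    default_avatar: bool

def bot_score(acct: Account) -> float:
    """Return a 0.0-1.0 score; higher means more bot-like.

    Each heuristic contributes an illustrative weight to the total.
    """
    score = 0.0
    if acct.posts_per_day > 100:          # inhuman posting volume
        score += 0.35
    if acct.account_age_days < 30:        # very new account
        score += 0.2
    if acct.following > 10 * max(acct.followers, 1):  # follow spam
        score += 0.25
    if acct.default_avatar:               # no profile customisation
        score += 0.2
    return min(score, 1.0)

suspect = Account(posts_per_day=400, account_age_days=7,
                  followers=12, following=5000, default_avatar=True)
print(bot_score(suspect))  # 1.0 — every heuristic fires
```

Real research tools combine many more signals (posting cadence, content similarity, network structure) into a classifier, but the shape of the output is the same: a likelihood score, not a yes/no answer — which is exactly why IDing bots is not an exact science.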

Another factor here is that given the sophisticated nature of some online disinformation campaigns — the state-sponsored and heavily resourced efforts by Kremlin backed entities such as Russia’s Internet Research Agency, for example — if the focus ends up being algorithmically controlled bots vs IDing bots that might have human agents helping or controlling them, plenty of more insidious disinformation agents could easily slip through the cracks.

That said, other measures in the EC’s proposals for platforms include stepping up their existing efforts to shutter fake accounts and being able to demonstrate the “effectiveness” of such efforts — so greater transparency around how fake accounts are identified and the proportion being removed (which could help surface more sophisticated human-controlled bot activity on platforms too).

Another measure from the package: The EC says it wants to see “significantly” improved scrutiny of ad placements — with a focus on trying to reduce revenue opportunities for disinformation purveyors.

Restricting targeting options for political advertising is another component. “Ensure transparency about sponsored content relating to electoral and policy-making processes,” is one of the listed objectives on its fact sheet — and ad transparency is something Facebook has said it’s prioritizing since revelations about the extent of Kremlin disinformation on its platform during the 2016 US presidential election, with expanded tools due this summer.

The Commission also says generally that it wants platforms to provide “greater clarity about the functioning of algorithms” and enable third-party verification — though there’s no greater level of detail being provided at this point to indicate how much algorithmic accountability it’s after from platforms.

We’ve asked for more on its thinking here and will update this story with any response. It looks to be testing the water to see how much of the workings of platforms’ algorithmic blackboxes can be coaxed from them voluntarily — such as via measures targeting bots and fake accounts — in an attempt to stave off formal and more expansive regulations down the line.

Filter bubbles also appear to be informing the Commission’s thinking, as it says it wants platforms to make it easier for users to “discover and access different news sources representing alternative viewpoints” — via tools that let users customize and interact with the online experience to “facilitate content discovery and access to different news sources”.

Though another stated objective is for platforms to “improve access to trustworthy information” — so there are questions about how those two aims can be balanced, i.e. without efforts towards one undermining the other. 

On trustworthiness, the EC says it wants platforms to help users assess whether content is reliable using “indicators of the trustworthiness of content sources”, as well as by providing “easily accessible tools to report disinformation”.

In one of several steps Facebook has taken since 2016 to try to tackle the problem of fake content being spread on its platform the company experimented with putting ‘disputed’ labels or red flags on potentially untrustworthy information. However the company discontinued this in December after research suggested negative labels could entrench deeply held beliefs, rather than helping to debunk fake stories.

Instead it started showing related stories — containing content it had verified as coming from news outlets its network of fact checkers considered reputable — as an alternative way to debunk potential fakes.

The Commission’s approach looks to be aligning with Facebook’s rethought approach — with the subjective question of how to make judgements on what is (and therefore what isn’t) a trustworthy source likely being handed off to third parties, given that another strand of the code is focused on “enabling fact-checkers, researchers and public authorities to continuously monitor online disinformation”.

Since 2016 Facebook has been leaning heavily on a network of local third party ‘partner’ fact-checkers to help identify and mitigate the spread of fakes in different markets — including checkers for written content and also photos and videos, the latter in an effort to combat fake memes before they have a chance to go viral and skew perceptions.

In parallel Google has also been working with external fact checkers, such as on initiatives such as highlighting fact-checked articles in Google News and search. 

The Commission clearly approves of the companies reaching out to a wider network of third party experts. But it is also encouraging work on innovative tech-powered fixes to the complex problem of disinformation — describing AI (“subject to appropriate human oversight”) as set to play a “crucial” role for “verifying, identifying and tagging disinformation”, and pointing to blockchain as having promise for content validation.

Specifically it reckons blockchain technology could play a role by, for instance, being combined with the use of “trustworthy electronic identification, authentication and verified pseudonyms” to preserve the integrity of content and validate “information and/or its sources, enable transparency and traceability, and promote trust in news displayed on the Internet”.
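To make the idea concrete, here is a toy hash chain over content records. It is only a sketch of the tamper-evidence property the Commission alludes to — not a real blockchain design (no consensus, no distribution), and all names and data in it are invented:

```python
import hashlib
import json

def chain_append(chain: list, content: str, source: str) -> dict:
    """Append a content record linked to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"content": content, "source": source, "prev": prev_hash}
    body = {k: record[k] for k in ("content", "source", "prev")}
    record["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(record)
    return record

def chain_valid(chain: list) -> bool:
    """Recompute every hash; any edit to content, source, or order fails."""
    prev = "0" * 64
    for rec in chain:
        body = {"content": rec["content"],
                "source": rec["source"], "prev": rec["prev"]}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != digest:
            return False
        prev = rec["hash"]
    return True

chain = []
chain_append(chain, "Article text v1", "example-news.eu")
chain_append(chain, "Follow-up article", "example-news.eu")
print(chain_valid(chain))   # True
chain[0]["content"] = "Tampered text"
print(chain_valid(chain))   # False
```

The property being sold here is modest but real: once a record's hash is anchored, neither the content nor its claimed source can be silently rewritten later — which is the "integrity and traceability" the Commission's language gestures at.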

It’s one of a handful of nascent technologies the executive flags as potentially useful for fighting fake news, and whose development it says it intends to support via an existing EU research funding vehicle: The Horizon 2020 Work Program.

It says it will use this program to support research activities on “tools and technologies such as artificial intelligence and blockchain that can contribute to a better online space, increasing cybersecurity and trust in online services”.

It also flags “cognitive algorithms that handle contextually-relevant information, including the accuracy and the quality of data sources” as a promising tech to “improve the relevance and reliability of search results”.

The Commission is giving platforms until July to develop and apply the Code of Practice — and is using the possibility that it could still draw up new laws if it feels the voluntary measures fail as a mechanism to encourage companies to put the sweat in.

It is also proposing a range of other measures to tackle the online disinformation issue — including:

  • An independent European network of fact-checkers: The Commission says this will establish “common working methods, exchange best practices, and work to achieve the broadest possible coverage of factual corrections across the EU”; and says they will be selected from the EU members of the International Fact Checking Network, which it notes follows “a strict International Fact Checking Network Code of Principles”
  • A secure European online platform on disinformation to support the network of fact-checkers and relevant academic researchers with “cross-border data collection and analysis”, as well as benefitting from access to EU-wide data
  • Enhancing media literacy: On this it says a higher level of media literacy will “help Europeans to identify online disinformation and approach online content with a critical eye”. So it says it will encourage fact-checkers and civil society organisations to provide educational material to schools and educators, and organise a European Week of Media Literacy
  • Support for Member States in ensuring the resilience of elections against what it dubs “increasingly complex cyber threats” including online disinformation and cyber attacks. Stated measures here include encouraging national authorities to identify best practices for the identification, mitigation and management of risks in time for the 2019 European Parliament elections. It also notes work by a Cooperation Group, saying “Member States have started to map existing European initiatives on cybersecurity of network and information systems used for electoral processes, with the aim of developing voluntary guidance” by the end of the year. It also says it will organise a high-level conference with Member States on cyber-enabled threats to elections in late 2018
  • Promotion of voluntary online identification systems with the stated aim of improving the “traceability and identification of suppliers of information” and promoting “more trust and reliability in online interactions and in information and its sources”. This includes support for related research activities in technologies such as blockchain, as noted above. The Commission also says it will “explore the feasibility of setting up voluntary systems to allow greater accountability based on electronic identification and authentication schemes” — as a measure to tackle fake accounts. “Together with other actions aimed at improving traceability online (improving the functioning, availability and accuracy of information on IP and domain names in the WHOIS system and promoting the uptake of the IPv6 protocol), this would also contribute to limiting cyberattacks,” it adds
  • Support for quality and diversified information: The Commission is calling on Member States to scale up their support of quality journalism to ensure a pluralistic, diverse and sustainable media environment. The Commission says it will launch a call for proposals in 2018 for “the production and dissemination of quality news content on EU affairs through data-driven news media”

It says it will aim to co-ordinate its strategic comms policy to try to counter “false narratives about Europe” — which makes you wonder whether debunking the output of certain UK tabloid newspapers might fall under that new EC strategy — and also more broadly to tackle disinformation “within and outside the EU”.

Commenting on the proposals in a statement, the Commission’s VP for the Digital Single Market, Andrus Ansip, said: “Disinformation is not new as an instrument of political influence. New technologies, especially digital, have expanded its reach via the online environment to undermine our democracy and society. Since online trust is easy to break but difficult to rebuild, industry needs to work together with us on this issue. Online platforms have an important role to play in fighting disinformation campaigns organised by individuals and countries who aim to threaten our democracy.”

The EC’s next steps now will be bringing the relevant parties together — including platforms, the ad industry and “major advertisers” — in a forum to work on greasing cooperation and getting them to apply themselves to what are still, at this stage, voluntary measures.

“The forum’s first output should be an EU–wide Code of Practice on Disinformation to be published by July 2018, with a view to having a measurable impact by October 2018,” says the Commission. 

The first progress report will be published in December 2018. “The report will also examine the need for further action to ensure the continuous monitoring and evaluation of the outlined actions,” it warns.

And if self-regulation fails…

In a fact sheet further fleshing out its plans, the Commission states: “Should the self-regulatory approach fail, the Commission may propose further actions, including regulatory ones targeted at a few platforms.”

And for “a few” read: Mainstream social platforms — so likely the big tech players in the social digital arena: Facebook, Google, Twitter.

For potential regulatory actions, tech giants need only look to Germany, where a 2017 social media hate speech law introduced fines of up to €50M for platforms that fail to comply with valid takedown requests within 24 hours in simple cases. That is an example of the kind of scary EU-wide law that could come rushing down the pipe at them if the Commission and EU states decide it’s necessary to legislate.

Justice and consumer affairs commissioner Vera Jourova signaled in January that her preference, on hate speech at least, was to continue pursuing the voluntary approach — though she also said some Member States’ ministers are open to a new EU-level law should the voluntary approach fail.

In Germany the so-called NetzDG law has faced criticism for pushing platforms towards risk aversion-based censorship of online content. And the Commission is clearly keen to avoid such charges being leveled at its proposals, stressing that if regulation were to be deemed necessary “such [regulatory] actions should in any case strictly respect freedom of expression”.

Commenting on the Code of Practice proposals, a Facebook spokesperson told us: “People want accurate information on Facebook – and that’s what we want too. We have invested heavily in fighting false news on Facebook by disrupting the economic incentives for the spread of false news, building new products and working with third-party fact checkers.”

A Twitter spokesman declined to comment on the Commission’s proposals but flagged contributions he said the company is already making to support media literacy — including an event last week at its EMEA HQ.

At the time of writing Google had not responded to a request for comment.

Last month the Commission did further tighten the screw on platforms over terrorist content specifically —  saying it wants them to get this taken down within an hour of a report as a general rule. Though it still hasn’t taken the step to cement that hour ‘rule’ into legislation, also preferring to see how much action it can voluntarily squeeze out of platforms via a self-regulation route.




from Social – TechCrunch https://ift.tt/2vYxesQ
via IFTTT

Shadow of the Tomb Raider Takes the Franchise to New Heights (and Depths)


Last Friday during the annual Tribeca Games Festival, Square-Enix officially unveiled Shadow of the Tomb Raider. The game’s existence hasn’t exactly been a secret, but it’s nice to know it’s actually real. The […]

The post Shadow of the Tomb Raider Takes the Franchise to New Heights (and Depths) appeared first on Geek.com.



from Geek.com https://ift.tt/2rfoIQP
via IFTTT

Blue Origin Successfully Completes Another Launch of New Shepard Rocket


Blue Origin successfully completed the eighth launch of its New Shepard suborbital crewed vehicle. On Sunday, the reusable rocket shot into a cloudless sky above west Texas, reaching an impressive 351,000 feet. “That’s […]

The post Blue Origin Successfully Completes Another Launch of New Shepard Rocket appeared first on Geek.com.



from Geek.com https://ift.tt/2JDsw5H
via IFTTT

5 More Podcasts for Comic Fans


As I moved over to start a humble little spot in comics, I realized that there were a lot of things out there that could help you learn about the craft, the people […]

The post 5 More Podcasts for Comic Fans appeared first on Geek.com.



from Geek.com https://ift.tt/2HDGJz6
via IFTTT

11 Adorable Freaks of Nature You Must See to Believe


Woah, have you heard of this two-headed porpoise thing? Ugh. It’s disgusting, right? I mean, yeah, it’s still one of God’s creatures, and She loves each and every one of them, but I […]

The post 11 Adorable Freaks of Nature You Must See to Believe appeared first on Geek.com.



from Geek.com https://ift.tt/2iKAzVX
via IFTTT

The Best Original Shows On Netflix Streaming

Stranger Things

It took Netflix quite a while to get into the original programming game, but unlike some competitors (ahem, Yahoo), they’re in it to win it. Starting off with new episodes of Arrested Development […]

The post The Best Original Shows On Netflix Streaming appeared first on Geek.com.



from Geek.com https://ift.tt/2k6eUpE
via IFTTT

The Idea That We Will Science Our Way Out of Climate Change is Dangerous, Say Scientists


Scientists are becoming increasingly concerned with the idea that people won’t make significant changes to their lifestyles and that governments won’t commit to radical action — instead relying on the deluded and dangerous […]

The post The Idea That We Will Science Our Way Out of Climate Change is Dangerous, Say Scientists appeared first on Geek.com.



from Geek.com https://ift.tt/2HXA1qV
via IFTTT

Subtitled Sci-fi and Adam Sandler: Let Geek Tell You What to Watch This Weekend


Forget Peak TV. We’re living in an age of Peak Content, period. There are so many cool shows and movies and games and weird internet videos you could consume at any given moment […]

The post Subtitled Sci-fi and Adam Sandler: Let Geek Tell You What to Watch This Weekend appeared first on Geek.com.



from Geek.com https://ift.tt/2Fr86vS
via IFTTT

Scientists Keep Big Brain Alive Without the Body for 36 Hours


Holy Shit. Scientists report being able to keep a pig’s brain alive — outside the body — for a day and a half. I know I’m just kinda restating the headline, but that’s […]

The post Scientists Keep Big Brain Alive Without the Body for 36 Hours appeared first on Geek.com.



from Geek.com https://ift.tt/2KkZo4o
via IFTTT

Facebook’s dark ads problem is systemic

Facebook’s admission to the UK parliament this week that it had unearthed unquantified thousands of dark fake ads after investigating fakes bearing the face and name of well-known consumer advice personality, Martin Lewis, underscores the massive challenge for its platform on this front. Lewis is suing the company for defamation over its failure to stop bogus ads besmirching his reputation with their associated scams.

Lewis decided to file his campaigning lawsuit after reporting 50 fake ads himself, having been alerted to the scale of the problem by consumers contacting him to ask if the ads were genuine or not. But the revelation that there were in fact associated “thousands” of fake ads being run on Facebook as a clickdriver for fraud shows the company needs to change its entire system, he has now argued.

In a response statement after Facebook’s CTO Mike Schroepfer revealed the new data-point to the DCMS committee, Lewis wrote: “It is creepy to hear that there have been 1,000s of adverts. This makes a farce of Facebook’s suggestion earlier this week that to get it to take down fake ads I have to report them to it.”

“Facebook allows advertisers to use what is called ‘dark ads’. This means they are targeted only at set individuals and are not shown in a time line. That means I have no way of knowing about them. I never get to hear about them. So how on earth could I report them? It’s not my job to police Facebook. It is Facebook’s job — it is the one being paid to publish scams.”

As Schroepfer told it to the committee, Facebook had removed the additional “thousands” of ads “proactively” — but as Lewis points out that action is essentially irrelevant given the problem is systemic. “A one off cleansing, only of ads with my name in, isn’t good enough. It needs to change its whole system,” he wrote.

In a statement on the case, a Facebook spokesperson told us: “We have also offered to meet Martin Lewis in person to discuss the issues he’s experienced, explain the actions we have taken already and discuss how we could help stop more bad ads from being placed.”

The committee raised various ‘dark ads’-related issues with Schroepfer — asking how, as with the Lewis example, a person could complain about an advert they literally can’t see?

The Facebook CTO avoided a direct answer but essentially his reply boiled down to: People can’t do anything about this right now; they have to wait until June when Facebook will be rolling out the ad transparency measures it trailed earlier this month — then he claimed: “You will basically be able to see every running ad on the platform.”

But there’s a very big difference between being technically able to see every ad running on the platform — and literally being able to see every ad running on the platform. (And, well, pity the pair of eyeballs that were condemned to that Dantean fate… )

In its PR about the new tools Facebook says a new feature — called “view ads” — will let users see the ads a Facebook Page is running, even if that Page’s ads haven’t appeared in an individual’s News Feed. So that’s one minor concession. However, while ‘view ads’ will apply to every advertiser Page on Facebook, a Facebook user will still have to know about the Page, navigate to it and click to ‘view ads’.

What Facebook is not launching is a public, searchable archive of all ads on its platform. It’s only doing that for a sub-set of ads — specifically those labeled “Political Ad”.

Clearly the Martin Lewis fakes wouldn’t fit into that category. So Lewis won’t be able to run searches against his name or face in future to try to identify new dark fake Facebook ads that are trying to trick consumers into scams by misappropriating his brand. Instead, he’d have to employ a massive team of people to click “view ads” on every advertiser Page on Facebook — and do so continuously, so long as his brand lasts — to try to stay ahead of the scammers.

So unless Facebook radically expands the ad transparency tools it has announced thus far it’s really not offering any kind of fix for the dark fake ads problem at all. Not for Lewis. Nor indeed for any other personality or brand that’s being quietly misused in the hidden bulk of scams we can only guess are passing across its platform.

Kremlin-backed political disinformation scams are really just the tip of the iceberg here. But even in that narrow instance Facebook estimated there had been 80,000 pieces of fake content targeted at just one election.

What’s clear is that without regulatory intervention the burden of proactive policing of dark ads and fake content on Facebook will keep falling on users — who will now have to actively sift through Facebook Pages to see what ads they’re running and try to figure out if they look legit.

Yet Facebook has 2BN+ users globally. The sheer number of Pages and advertisers on its platform renders “view ads” an almost entirely meaningless addition, especially as cyberscammers and malicious actors are also going to be experts at setting up new accounts to further their scams — moving on to the next batch of burner accounts after they’ve netted each fresh catch of unsuspecting victims.

The committee asked Schroepfer whether Facebook retains money from advertisers it ejects from its platform for running ‘bad ads’ — i.e. after finding they were running an ad its terms prohibit. He said he wasn’t sure, and promised to follow up with an answer. Which rather suggests it doesn’t have an actual policy. Mostly it’s happy to collect your ad spend.

“I do think we are trying to catch all of these things pro-actively. I won’t want the onus to be put on people to go find these things,” he also said, which is essentially a twisted way of saying the exact opposite: That the onus remains on users — and Facebook is simply hoping to have a technical capacity that can accurately review content at scale at some undefined moment in the future.

“We think of people reporting things, we are trying to get to a mode over time — particularly with technical systems — that can catch this stuff up front,” he added. “We want to get to a mode where people reporting bad content of any kind is the sort of defense of last resort and that the vast majority of this stuff is caught up front by automated systems. So that’s the future that I am personally spending my time trying to get us to.”

Trying, want to, future… aka zero guarantees that the parallel universe he was describing will ever align with the reality of how Facebook’s business actually operates — right here, right now.

In truth this kind of contextual AI content review is a very hard problem, as Facebook CEO Mark Zuckerberg has himself admitted. And it’s by no means certain the company can develop robust systems to properly police this kind of stuff. Certainly not without hiring orders of magnitude more human reviewers than it’s currently committed to doing. It would need to employ literally millions more humans to manually check all the nuanced things AIs simply won’t be able to figure out.

Or else it would need to radically revise its processes — as Lewis has suggested — to make them a whole lot more conservative than they currently are: by, for example, requiring much more careful and thorough scrutiny of (and even pre-vetting) certain classes of high-risk adverts. So yes, by engineering in friction.

In the meanwhile, as Facebook continues its lucrative business as usual — raking in huge earnings thanks to its ad platform (in its Q1 earnings this week it reported a whopping $11.97BN in revenue) — Internet users are left performing unpaid moderation for a massively wealthy for-profit business while simultaneously being subject to the bogus and fraudulent content its platform is also distributing at scale.

There’s a very clear and very major asymmetry here — and one that European lawmakers, at least, look increasingly wise to.

Facebook’s habit of falling back on its massive size as the justification for why it keeps failing on so many types of issues — be it consumer safety or indeed data protection compliance — may even have interesting competition-related implications, as some have suggested.

On the technical front, Schroepfer was asked specifically by the committee why Facebook doesn’t use the facial recognition technology it has already developed — which it applies across its user-base for features such as automatic photo tagging — to block ads that are using a person’s face without their consent.

“We are investigating ways to do that,” he replied. “It is challenging to do technically at scale. And it is one of the things I am hopeful for in the future that would catch more of these things automatically. Usually what we end up doing is a series of different features would figure out that these ads are bad. It’s not just the picture, it’s the wording. What can often catch classes — what we’ll do is catch classes of ads and say ‘we’re pretty sure this is a financial ad, and maybe financial ads we should take a little bit more scrutiny on up front because there is the risk for fraud’.

“This is why we took a hard look at the hype going around cryptocurrencies. And decided that — when we started looking at the ads being run there, the vast majority of those were not good ads. And so we just banned the entire category.”

That response is also interesting, given that many of the fake ads Lewis is complaining about (which incidentally often point to offsite crypto scams) — and indeed which he has been complaining about for months at this point — fall into a financial category.

If Facebook can easily identify classes of ads using its current AI content review systems, why hasn’t it been able to proactively catch the thousands of dodgy fake ads bearing Lewis’ image?

Why did it require Lewis to make a full 50 reports — and have to complain to it for months — before Facebook did some ‘proactive’ investigating of its own?

And why isn’t it proposing to radically tighten the moderation of financial ads, period?
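The category-level triage Schroepfer describes (extra scrutiny up front for financial ads, an outright ban for a category found to be overwhelmingly bad) reduces to a few lines of routing logic. The category labels, the risk threshold, and the action names below are all invented for illustration; this is a sketch of the idea, not Facebook's actual policy values:

```python
# Illustrative routing logic for category-based ad triage.
# Category names, the risk threshold, and the actions are all
# assumptions for the sketch, not Facebook's real policy values.
BANNED_CATEGORIES = {"cryptocurrency"}
HIGH_RISK_CATEGORIES = {"financial"}
RISK_THRESHOLD = 0.9

def triage(category: str, model_risk_score: float) -> str:
    """Decide what happens to an ad before it is allowed to run."""
    if category in BANNED_CATEGORIES:
        return "reject"          # "we just banned the entire category"
    if category in HIGH_RISK_CATEGORIES:
        return "human_review"    # extra scrutiny up front
    if model_risk_score > RISK_THRESHOLD:
        return "human_review"    # model thinks it looks risky
    return "auto_approve"

print(triage("cryptocurrency", 0.1))  # reject
print(triage("financial", 0.1))       # human_review
```

The point of the sketch is how little machinery the crypto-ban approach actually requires once a category can be recognized; the hard part is the recognition, not the routing.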

The risks to individual users here are stark and clear. (Lewis writes, for example, that “one lady had over £100,000 taken from her”.)

Again it comes back to the company simply not wanting to slow down its revenue engines, nor take the financial hit and business burden of employing enough humans to review all the free content it’s happy to monetize. It also doesn’t want to be regulated by governments — which is why it’s rushing out its own set of self-crafted ‘transparency’ tools, rather than waiting for rules to be imposed on it.

Committee chair Damian Collins concluded one round of dark ads questions for the Facebook CTO by remarking that his overarching concern about the company’s approach is that “a lot of the tools seem to work for the advertiser more than they do for the consumer”. And, really, it’s hard to argue with that assessment.

This is not just an advertising problem either. All sorts of other issues that Facebook has been blasted for not doing enough about can also be explained as the result of inadequate content review — from hate speech, to child protection issues, to people trafficking, to ethnic violence in Myanmar, which the UN has accused its platform of exacerbating (the committee questioned Schroepfer on that too, and he lamented that it is “awful”).

In the Lewis fake ads case, this type of ‘bad ad’ — as Facebook would call it — should really be the most trivial type of content review problem for the company to fix, because it’s an exceedingly narrow issue, involving a single named individual. (Though that might also explain why Facebook hasn’t bothered; albeit having ‘total willingness to trash individual reputations’ as your business M.O. doesn’t make for a nice PR message to sell.)

And of course it goes without saying there are far more — and far more murky and obscure — uses of dark ads that remain to be fully dragged into the light where their impact on people, societies and civilized processes can be scrutinized and better understood. (The difficulty of defining what is a “political ad” is another lurking loophole in the credibility of Facebook’s self-serving plan to ‘clean up’ its ad platform.)

Schroepfer was asked by one committee member about the use of dark ads to try to suppress African American votes in the US elections, for example, but he just reframed the question to avoid answering it — saying instead that he agrees with the principle of “transparency across all advertising”, before repeating the PR line about tools coming in June. Shame those “transparency” tools look so well designed to ensure Facebook’s platform remains as shadily opaque as possible.

Whatever the role of US-targeted Facebook dark ads in African American voter suppression, Schroepfer wasn’t at all comfortable talking about it — and Facebook isn’t publicly saying. Though the CTO confirmed to the committee that Facebook employs people to work with advertisers, including political advertisers, to “help them to use our ad systems to best effect”.

“So if a political campaign were using dark advertising your people helping support their use of Facebook would be advising them on how to use dark advertising,” astutely observed one committee member. “So if somebody wanted to reach specific audiences with a specific message but didn’t want another audience to [view] that message because it would be counterproductive, your people who are supporting these campaigns by these users spending money would be advising how to do that wouldn’t they?”

“Yeah,” confirmed Schroepfer, before immediately pointing to Facebook’s ad policy — claiming “hateful, divisive ads are not allowed on the platform”. But of course bad actors will simply ignore your policy unless it’s actively enforced.

“We don’t want divisive ads on the platform. This is not good for us in the long run,” he added, without shedding so much as a chink more light on any of the bad things Facebook-distributed dark ads might have already done.

At one point he even claimed not to know what the term ‘dark advertising’ meant — leading the committee member to read out the definition from Google, before noting drily: “I’m sure you know that.”

Pressed again on why Facebook can’t use facial recognition at scale to at least fix the Lewis fake ads — given it’s already using the tech elsewhere on its platform — Schroepfer played down the value of the tech for these types of security use-cases, saying: “The larger the search space you use, so if you’re looking across a large set of people the more likely you’ll have a false positive — that two people tend to look the same — and you won’t be able to make automated decisions that said this is for sure this person.

“This is why I say that it may be one of the tools but I think usually what ends up happening is it’s a portfolio of tools — so maybe it’s something about the image, maybe the fact that it’s got ‘Lewis’ in the name, maybe the fact that it’s a financial ad, wording that is consistent with a financial ads. We tend to use a basket of features in order to detect these things.”

That’s also an interesting response since it was a security use-case that Facebook selected as the first of just two sample ‘benefits’ it presents to users in Europe ahead of the choice it is required (under EU law) to offer people on whether to switch facial recognition technology on or keep it turned off — claiming it “allows us to help protect you from a stranger using your photo to impersonate you”…

Yet judging by its own CTO’s analysis, Facebook’s face recognition tech would actually be pretty useless for identifying “strangers” misusing your photographs — at least without being combined with a “basket” of other unmentioned (and doubtless equally privacy-hostile) technical measures.
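Schroepfer's false-positive argument is, at bottom, simple probability: if each one-to-one face comparison carries some small false match rate, the chance of at least one false match grows with the size of the gallery being searched. A toy calculation makes the shape of the curve visible (the per-comparison rate used here is an assumption chosen purely for illustration):

```python
# Why one-to-many face search degrades as the gallery grows: with a
# per-comparison false match rate f, the chance of at least one false
# match against a gallery of n faces is 1 - (1 - f)**n.
def p_false_match(fmr: float, gallery_size: int) -> float:
    """Probability of at least one false match in a gallery search."""
    return 1 - (1 - fmr) ** gallery_size

FMR = 1e-6  # assumed one-in-a-million per comparison, illustrative only
for n in (1_000, 1_000_000, 100_000_000):
    print(f"{n:>11,} faces -> {p_false_match(FMR, n):.4f}")
```

At that assumed rate, a search against a thousand faces almost never misfires, but a search against a million faces already has a roughly 63% chance of producing at least one false match, and at Facebook's user-base scale a false match is near certain.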

So this is yet another example of a manipulative message being put out by a company that is also the controller of a platform that enables all sorts of unknown third parties to experiment with and distribute their own forms of manipulative messaging at vast scale, thanks to a system designed to facilitate — nay, embrace — dark advertising.

What face recognition technology is genuinely useful for is Facebook’s own business. Because it gives the company yet another personal signal to triangulate and better understand who people on its platform are really friends with — which in turn fleshes out the user-profiles behind the eyeballs that Facebook uses to fuel its ad targeting, money-minting engines.

For profiteering use-cases the company rarely sits on its hands when it comes to engineering “challenges”. Hence its erstwhile motto to ‘move fast and break things’ — which has now, of course, morphed uncomfortably into Zuckerberg’s 2018 mission to ‘fix the platform’; thanks, in no small part, to the existential threat posed by dark ads which, up until very recently, Facebook wasn’t saying anything about at all. Except to claim it was “crazy” to think they might have any influence.

And now, despite major scandals and political pressure, Facebook is still showing zero appetite to “fix” its platform — because the issues being thrown into sharp relief are actually there by design; this is how Facebook’s business functions.

“We won’t prevent all mistakes or abuse, but we currently make too many errors enforcing our policies and preventing misuse of our tools. If we’re successful this year then we’ll end 2018 on a much better trajectory,” wrote Zuckerberg in January, underlining how much easier it is to break stuff than put things back together — or even just make a convincing show of fiddling with sticking plaster.



from Social – TechCrunch https://ift.tt/2vUlg3j
via IFTTT

9 Things Dwayne ‘The Rock’ Johnson Should Fight Next


I’m gonna get incredibly real with you guys for a moment: I think about Dwayne ‘The Rock’ Johnson more than anything else in this world. I think about him more than work, my […]

The post 9 Things Dwayne ‘The Rock’ Johnson Should Fight Next appeared first on Geek.com.



from Geek.com https://ift.tt/2JBBtwt
via IFTTT

Once Upon a Time Gives its Villain a Motive Before it Ends


Once Upon a Time just gave its villain a tragic backstory, so you know what that means. This season, and series, is almost about to end. You always know things are about to go […]

The post Once Upon a Time Gives its Villain a Motive Before it Ends appeared first on Geek.com.



from Geek.com https://ift.tt/2vU8Ngb
via IFTTT

Amazon and Tesla Have Some of the Most Dangerous Facilities in the US


While tech giants may once have been the golden bois of the modern world, they’ve definitely been pretty bad of late. Yahoo failed to notify its customers of one of the largest data […]

The post Amazon and Tesla Have Some of the Most Dangerous Facilities in the US appeared first on Geek.com.



from Geek.com https://ift.tt/2r8rMy1
via IFTTT

Facebook shrinks fake news after warnings backfire

Tell someone not to do something and sometimes they just want to do it more. That’s what happened when Facebook put red flags on debunked fake news. Users who wanted to believe the false stories had their fevers ignited and they actually shared the hoaxes more. That led Facebook to ditch the incendiary red flags in favor of showing Related Articles with more level-headed perspectives from trusted news sources.

But now it’s got two more tactics to reduce the spread of misinformation, which Facebook detailed at its Fighting Abuse @Scale event in San Francisco. Facebook’s director of News Feed integrity, Michael McNally, and data scientist Lauren Bose gave a talk discussing all the ways it intervenes. The company is trying to walk a fine line between censorship and sensibility.

These red warning labels actually backfired and made some users more likely to share, so Facebook switched to showing Related Articles

First, rather than call more attention to fake news, Facebook wants to make it easier to miss these stories while scrolling. When Facebook’s third-party fact-checkers verify an article is inaccurate, Facebook will shrink the size of the link post in the News Feed. “We reduce the visual prominence of feed stories that are fact-checked false,” a Facebook spokesperson confirmed to me.

In Facebook’s example images, a confirmed-to-be-false news story on mobile shows up with its headline and image rolled into a single smaller row of space, and a Related Articles box beneath it shows “Fact-Checker”-labeled stories debunking the original link. A real news article’s image, by contrast, appears about 10 times larger, and its headline gets its own space.


Second, Facebook is now using machine learning to look at newly published articles and scan them for signs of falsehood. Combined with other signals like user reports, Facebook can use high falsehood prediction scores from the machine learning systems to prioritize articles in its queue for fact-checkers. That way, the fact-checkers can spend their time reviewing articles that are already likely to be false.

“We use machine learning to help predict things that might be more likely to be false news, to help prioritize material we send to fact-checkers (given the large volume of potential material),” a spokesperson from Facebook confirmed. The social network now works with 20 fact-checkers in several countries around the world, but it’s still trying to find more to partner with. In the meantime, the machine learning will ensure their time is used efficiently.
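This prioritization can be pictured as a scored queue: each new article gets a falsehood prediction from the model, that score is blended with other signals such as user reports, and fact-checkers work from the top. The blending weights and the report cap below are invented for illustration; Facebook hasn't published how it actually combines signals:

```python
import heapq

# Hypothetical fact-checking queue: blend the model's falsehood score
# with user reports and pop the most suspicious article first. The
# weights and the report cap are invented for illustration.
def priority(model_score: float, user_reports: int) -> float:
    return 0.7 * model_score + 0.3 * min(user_reports / 50, 1.0)

def build_queue(articles):
    """articles: iterable of (url, model_score, user_reports) tuples."""
    heap = []
    for url, score, reports in articles:
        # heapq is a min-heap, so negate to pop the highest priority
        heapq.heappush(heap, (-priority(score, reports), url))
    return heap

queue = build_queue([
    ("example.com/story-a", 0.95, 12),  # model fairly sure it's false
    ("example.com/story-b", 0.40, 80),  # heavily reported by users
    ("example.com/story-c", 0.10, 0),   # probably fine
])
print(heapq.heappop(queue)[1])  # the most suspicious story goes first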

Bose and McNally also walked the audience through Facebook’s “ecosystem” approach that fights fake news at every step of its development:

  • Account Creation – If accounts are created using fake identities or networks of bad actors, they’re removed.
  • Asset Creation – Facebook looks for similarities to shut down clusters of fraudulently created Pages and inhibit the domains they’re connected to.
  • Ad Policies – Malicious Pages and domains that exhibit signatures of misuse lose the ability to buy or host ads, which deters them from growing their audience or monetizing it.
  • False Content Creation – Facebook applies machine learning to text and images to find patterns that indicate risk.
  • Distribution – To limit the spread of false news, Facebook works with fact-checkers. If they debunk an article, its size shrinks, Related Articles are appended and Facebook downranks the stories in News Feed.

Together, by chipping away at each phase, Facebook says it can reduce the spread of a false news story by 80 percent. Facebook needs to prove it has a handle on false news before more big elections in the U.S. and around the world arrive. There’s a lot of work to do, but Facebook has committed to hiring enough engineers and content moderators to attack the problem. And with conferences like Fighting Abuse @Scale, it can share its best practices with other tech companies so Silicon Valley can put up a united front against election interference.
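The 80 percent figure is easiest to understand as multiplication: if each phase independently blocks some share of would-be spread, the fraction that survives is the product of the pass-through rates. The per-stage rates below are invented purely to illustrate the arithmetic; Facebook only cites the overall number:

```python
# If each phase independently blocks a share of would-be spread, the
# surviving fraction is the product of the pass-through rates. These
# per-stage block rates are invented; Facebook cites only the total.
stage_block_rates = {
    "account creation":       0.30,
    "asset creation":         0.20,
    "ad policies":            0.25,
    "false content creation": 0.25,
    "distribution":           0.40,
}

survival = 1.0
for rate in stage_block_rates.values():
    survival *= 1 - rate

print(f"overall reduction: {1 - survival:.0%}")
```

With these made-up rates, no stage blocks more than 40 percent on its own, yet together they compound to roughly an 80 percent overall reduction, which is the shape of the claim: chipping away at every phase matters more than any single filter.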



from Social – TechCrunch https://ift.tt/2r6F3Hl
via IFTTT

Studio Ghibli Theme Park Opening in Japan in 2022


There’s plenty of fans that want to live in the amazing worlds of Miyazaki, and in a few short years, you’ll get the chance (at least if you’re in Japan). In 2022, a […]

The post Studio Ghibli Theme Park Opening in Japan in 2022 appeared first on Geek.com.



from Geek.com https://ift.tt/2FnAKMO
via IFTTT

GEEK PICK OF THE DAY: Dungeons and Dragons Hawaiian Shirt


I like loud button-down shirts. My personal aesthetic varies between Parker Lewis’ wardrobe and rayon shirts with a Super Saiyan on them. That’s why, while this shirt might be loud for some, it’s […]

The post GEEK PICK OF THE DAY: Dungeons and Dragons Hawaiian Shirt appeared first on Geek.com.



from Geek.com https://ift.tt/2HvH36W
via IFTTT

This Giant Lego Roller Coaster Is Built For Thrill-seeking Minifigs


Lego has unveiled the latest addition to its theme park attractions. The new set is a massive, fully-functional roller coaster, and like the other rides in the collection it’s big enough for your […]

The post This Giant Lego Roller Coaster Is Built For Thrill-seeking Minifigs appeared first on Geek.com.



from Geek.com https://ift.tt/2jf8jZ1
via IFTTT

Tokyo’s Square Enix Cafe is Fun, Even With Soccer


When you travel, you probably want to experience new foods at good restaurants, really get a feel for the best things the locals eat. Or, if you’re like me, you want to eat […]

The post Tokyo’s Square Enix Cafe is Fun, Even With Soccer appeared first on Geek.com.



from Geek.com https://ift.tt/2KlswZo
via IFTTT

Photographer Captures Crescent Moon on a Pink Cloud


Instagram is littered with #nofilter photos of colorful sunsets, clouds rippling through shades of blue, pink, and orange as dusk settles. But rarely do those images capture moonrises—the first appearance of the Moon […]

The post Photographer Captures Crescent Moon on a Pink Cloud appeared first on Geek.com.



from Geek.com https://ift.tt/2HwiKWv
via IFTTT

This App Tells You How (Legally) Stoned You Are


It’s a weird weed world we now find ourselves living in, at least here in America, and not just on 4/20. State after state is legalizing cannabis not just for medicinal use but […]

The post This App Tells You How (Legally) Stoned You Are appeared first on Geek.com.



from Geek.com https://ift.tt/2JyduhG
via IFTTT

Crypto Company Endorsed By McAfee Exposed Data On 25,000 Investors


By now, most of you know that John McAfee is a big fan of all things privacy-related, like crypto companies. He’s also a big fan of being paid six figures for Tweets that […]

The post Crypto Company Endorsed By McAfee Exposed Data On 25,000 Investors appeared first on Geek.com.



from Geek.com https://ift.tt/2I3oSTg
via IFTTT

11 Best Sci-Fi TV Shows to Binge Watch


There’s nothing that makes a human happier than binge-watching a good show. You’ll sit and let Netflix yell at you to get up and stretch once in awhile, but you can’t stop a […]

The post 11 Best Sci-Fi TV Shows to Binge Watch appeared first on Geek.com.



from Geek.com https://ift.tt/2r3s61x
via IFTTT

Facebook’s Messenger Kids’ app gains a ‘sleep mode’

Facebook’s Messenger Kids, the social network’s new chat app for the under-13 crowd, has been designed to give parents more control over their kids’ contact list. Today, the app is gaining a new feature, “sleep mode,” aimed at giving parents the ability to turn the app off at designated times. The idea is that parents and children will talk about when it’s appropriate to send messages to friends and family, and when it’s time for other activities – like homework or bedtime, for example.

The app, which launched last December, has not been without controversy.

Some see it as a gateway drug for Facebook proper. Others whine that “kids should be playing outside!” – as if kids don’t engage in all sorts of activities, including device usage, at times. And of course, amid Facebook’s numerous scandals around data privacy, it’s hard for some parents to fathom installing a Facebook-operated anything on their child’s phone or tablet.

But the reality, from down here in the parenting trenches, is that kids are messaging anyway and we’re desperately short on tools.

Instead of apps built with children’s and parents’ needs in mind, our kids are fumbling around on their own, making mistakes, then having their devices taken away in punishment.

The truth is, with the kids, it’s too late to put the toothpaste back in the tube. Our children are FaceTime’ing their way through Roblox playdates, they’re texting grandma and grandpa, they’re watching YouTube instead of TV, and they’re begging for too-adult apps like Snapchat – so they can play with the face filters – and Musical.ly, which has a lot of inappropriate content. (Seriously, can someone launch kid-safe versions?)

Until Messenger Kids, parents haven’t been offered any social or messaging apps built with monitoring and education in mind.

I decided to install it on my own child’s device, and I’ll admit being conflicted. But I’m using it with my child as a learning tool. We talk about how to use the app’s features, but also about appropriate messaging behavior – what to chat about, why not to send a dozen stickers at once, and how to politely end a conversation, for example.

Unlike child predator playgrounds like Kik, popularity-focused social apps like Instagram, or apps where messages simply vanish like Snapchat, Messenger Kids lets parents choose the contact list and control the experience. And, as a backup, I have a copy of the app on my own phone, so I can spot check messages sent when I’m not around.

With the new sleep mode feature, I can now turn Messenger Kids off at certain times. That means no more 8 AM video calls to the BFF. (Yes, we’ve discussed this – after the fact. Sorry, BFF’s parents.) And no more messaging right at bedtime, either.

To configure sleep mode, parents access the Messenger Kids controls from the main Facebook app, and tap on the child’s name. You can create different settings for weekdays and weekends. If the child tries to use the app during these times, they’ll instead see a message that says the app is in sleep mode and to come back later.
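Under the hood, a check like this only needs the current time and the two configured windows. The sketch below assumes a simple per-day-type schedule with windows that can cross midnight; the window values are placeholders, not Messenger Kids' actual settings:

```python
from datetime import datetime, time

# Hypothetical sleep-mode check with separate weekday and weekend
# windows, mirroring the settings described above. The window values
# are placeholders, not Messenger Kids' actual defaults.
SLEEP_WINDOWS = {
    "weekday": (time(20, 0), time(8, 30)),  # 8:00 pm to 8:30 am
    "weekend": (time(21, 0), time(8, 0)),   # 9:00 pm to 8:00 am
}

def in_sleep_mode(now: datetime) -> bool:
    kind = "weekend" if now.weekday() >= 5 else "weekday"
    start, end = SLEEP_WINDOWS[kind]
    t = now.time()
    if start <= end:
        return start <= t < end
    return t >= start or t < end  # window crosses midnight

# Monday, April 30, 2018: the 8 AM video call is blocked, lunchtime isn't.
print(in_sleep_mode(datetime(2018, 4, 30, 8, 0)))   # True
print(in_sleep_mode(datetime(2018, 4, 30, 12, 0)))  # False
```

The midnight-crossing branch is the only subtlety: a window like 8 pm to 8:30 am can't be tested with a single range comparison, so the check flips to "after start or before end" when the start time is later than the end time.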

The control panel is also where parents can add and remove contacts, delete the child’s account, or create a new account.

Facebook suggests that parents have a discussion with kids about the boundaries they’re creating when turning on sleep mode.


That may seem obvious, but it’s surprisingly not. I’ve actually heard some parents scoff at parental control features because they think it’s about offloading the job of parenting to technology. It’s not. It’s about using tools and parenting techniques together – whether that’s internet off times, device or app “bedtimes,” internet filtering, or whatever other mechanisms parents employ.

I understand if you can’t get past the fact that the app is from Facebook, of all places. Or you have a philosophical point of view on using Facebook products. But Facebook integration means this app could scale. In the few months it’s been live, the app has been downloaded around 325,000 times, according to data from Sensor Tower.

Messenger Kids is a free download on iOS and Android.




from Social – TechCrunch https://ift.tt/2vRSdO6
via IFTTT