Siren Has a Wonderfully Strange Atmosphere, but the Story’s a Slog


There’s so much to like about Freeform’s new supernatural mystery show, Siren. Its take on mermaids is new, unique, and very cool. Whenever they’re on screen, the show gets ten times better. Then there’s […]

The post Siren Has a Wonderfully Strange Atmosphere, but the Story’s a Slog appeared first on Geek.com.



from Geek.com https://ift.tt/2Gn6Iy5
via IFTTT

Arbtr wants to create an anti-feed where users can only share one thing at a time

At a time when the models of traditional social networks are being questioned, it’s more important than ever to experiment with alternatives. Arbtr is a proposed social network that limits users to sharing a single thing at any given time, encouraging “ruthless self-editing” and avoiding “nasty things” like endless feeds filled with trivial garbage.

It’s seeking funds on Kickstarter and could use a buck or two. I plan to chip in.

Now, I know what you’re thinking. “Why would I give money to maybe join a social network eventually that might not have any of my friends on it? That is, if it ever even exists?” Great question.

The answer is: how else do you think we’re going to replace Facebook? Someone with a smart, different idea has to come along and we have to support them. If we won’t spare the cost of a cup of coffee for a purpose like that, then we deserve the social networks we’ve got. (And if I’m honest, I’ve had very similar ideas over the last few years and I’m eager to see how they might play out in reality.)

The fundamental feature is, of course, the single-sharing thing. You can only show off one item at a time, and when you post a new one, the old one (and any discussion, likes, etc.) will be deleted. There will be options to keep logs of these things, and maybe premium features to access them (or perhaps metrics), but the basic proposal is, I think, quite sound — at the very least, worth trying.
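The single-slot model is simple enough to sketch. The snippet below is purely illustrative (Arbtr hasn’t shipped anything, and none of these names come from the project): posting replaces whatever was there before and discards its discussion, with an optional log standing in for a possible premium feature.

```python
# Hypothetical sketch of Arbtr's single-slot model (not from the Kickstarter
# pitch): posting a new item replaces the old one and wipes its discussion,
# with an optional private log standing in for a possible premium feature.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Share:
    content: str
    comments: List[str] = field(default_factory=list)
    likes: int = 0

@dataclass
class Profile:
    current: Optional[Share] = None                    # the single visible item
    archive: List[str] = field(default_factory=list)   # invented premium-style log

    def post(self, content: str) -> None:
        # Replacing the current share deletes its discussion and likes.
        if self.current is not None:
            self.archive.append(self.current.content)  # keep the text only
        self.current = Share(content)

me = Profile()
me.post("my favorite photo this month")
me.post("a song I can't stop playing")  # the photo, its likes and comments are gone
print(me.current.content, "| archived:", me.archive)
```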

Some design ideas for the app. I like the text one but it does need thumbnails.

If you’re sharing less, as Arbtr insists you will, then presumably you’ll put more love behind those things you do share. Wouldn’t that be nice?

We’re in this mess because we bought wholesale the idea that the more you share, the more connected you are. Now that we’ve found that isn’t the case – and in fact we were in effect being fattened for a perpetual slaughter — I don’t see why we shouldn’t try something else.

Will it be Arbtr? I don’t know. Probably not, but we’ve got a lot to gain by giving ideas like this a shot.



from Social – TechCrunch https://ift.tt/2uyViC3
via IFTTT

Who gains from Facebook’s missteps?

When Facebook loses, who wins?

That’s a question for startups that may be worth contemplating following Facebook’s recent stock price haircut. The company’s valuation has fallen by around $60 billion since the Cambridge Analytica scandal surfaced earlier this month and the #DeleteFacebook campaign gained momentum.

That’s a steep drop, equal to about 12 percent of the company’s market valuation, and it’s a decline Facebook appears to be suffering alone. As its shares fell over the past couple of weeks, the stocks of other large-cap tech and online media companies have stayed much flatter.

So where did the money go? It’s probably a matter of perspective. For a Facebook shareholder, that valuation is simply gone. And until executives’ apologies resonate and users’ desire to click and scroll overcomes their privacy fears, that’s how it is.

An alternate view is that the valuation didn’t exactly disappear. Investors may still believe the broad social media space is just as valuable as it was a couple of weeks ago. It’s just that less of that pie should be the exclusive domain of Facebook.

If one takes that second notion, then the possibilities for who could benefit from Facebook’s travails start to get interesting. Of course, there are public market companies, like Snap or Twitter, that might pick up traffic if the #DeleteFacebook movement gains momentum without spreading to other big brands. But it’s in the private markets where we see the highest number of potential beneficiaries of Facebook’s problems.

In an effort to come up with some names, we searched through Crunchbase for companies in social media and related areas. The resulting list includes companies that have raised good-sized rounds in the past couple of years and could conceivably see gains if people cut back on using Facebook or owning its stock.

Of course, people use Facebook for different things (posting photos, getting news, chatting with friends and so on), so we lay out a few categories of potential beneficiaries of a Facebook backlash.

Messaging

Facebook has a significant messaging presence, but it hasn’t been declared the winner. Alternatives like Snap, LINE, WeChat and plain old text messages are also massively popular.

That said, what’s bad for Messenger and Facebook-owned WhatsApp is probably good for competitors. And if more people want to do less of their messaging on Facebook, it helps that there are a number of private companies ready to take its place.

Crunchbase identified six well-funded messaging apps that could fit the bill (see list). Collectively, they’ve raised well over $2 billion — if one includes the $850 million initial coin offering by Telegram.

Increasingly, these private messaging startups are focused on privacy and security, including Wickr, the encrypted messaging tool that has raised more than $70 million, and Silent Circle, another encrypted communications provider that has raised $130 million.

Popular places to browse on a screen

People who cut back on Facebook may still want to spend hours a day staring at posts on a screen. So it’s likely they’ll start staring at something else that’s content-rich, easy-to-navigate and somewhat addictive.

Luckily, there are plenty of venture-backed companies that fit that description. Many of these are quite mature at this point, including Pinterest for image collections, Reddit for post and comment threads and Quora for Q&A (see list).

Granted, these will not replace the posts keeping you up to date on the life events of family and friends. But they could be a substitute for news feeds, meme shares and other non-personal posts.

Niche content

A decline in Facebook usage could translate into a rise in traffic for a host of niche content and discussion platforms focused on sports, celebrities, social issues and other subjects.

Crunchbase News identified at least a half-dozen that have raised funding in recent quarters, which is just a sampling of the total universe. Selected startups run the gamut from The Players’ Tribune, which features first-hand accounts from top athletes, to Medium, which seeks out articles that resonate with a wide audience.

Niche sites also provide a more customized forum for celebrities, pundits and subject-matter experts to engage directly with fans and followers.

Community and engagement

People with common interests don’t have to share them on Facebook. There are other places that can offer more tailored content and social engagement.

In recent years, we’ve seen an increase in community and activity-focused social apps gain traction. Perhaps the most prominent is Nextdoor, which connects neighbors for everything from garage sales to crime reports. We’re also seeing some upstarts focused on creating social networks for interest groups. These include Mighty Networks and Amino Apps.

Though some might call it a stretch, we also added to the list WeWork, recent acquirer of Meetup, and The Guild, two companies building social networks in the physical world. These companies are encouraging people to come out and socially network with other people (even if that just means sitting in a room with other people staring at a screen).

Watch where the money goes

Facebook’s latest imbroglio is still too recent to expect a visible impact in the startup funding arena. But it will be interesting to watch in the coming months whether potential rivals in the above categories raise a lot more cash and attract more users.

If there’s demand, there’s certainly no shortage of supply on the investor front. The IPO window is wide open, and venture investors are sitting on record piles of dry powder. It hasn’t escaped notice, either, that social media offerings, like Facebook, LinkedIn and Snap, have generated the biggest exit total of any VC-funded sector.

Moreover, those who’ve argued that it’s too late for newcomers have a history of being proven wrong. After all, that’s what people were saying about would-be competitors to MySpace in 2005, not long before Facebook made it big.



from Social – TechCrunch https://ift.tt/2Ij7TuX
via IFTTT

Saucony’s Dunkin’ Donuts Sneakers Sold Out Quicker Than a Boston Kreme


America runs on Dunkin’—literally. Dunkin’ Donuts and Saucony teamed up to design a sneaker as sweet as a chocolate glaze. Their collaboration celebrates “the symbiotic relationship between coffee, running, and donuts.” Or, as […]

The post Saucony’s Dunkin’ Donuts Sneakers Sold Out Quicker Than a Boston Kreme appeared first on Geek.com.



from Geek.com https://ift.tt/2IhCMjl
via IFTTT

Nonprofit Finds Trace Amounts of Arsenic in Popular Beer, Wine Brands


A number of popular wine and beer brands contain trace amounts of arsenic and heavy metals found in herbicide. As if you needed another reason to cut back on alcohol intake… CBS opened […]

The post Nonprofit Finds Trace Amounts of Arsenic in Popular Beer, Wine Brands appeared first on Geek.com.



from Geek.com https://ift.tt/2pSDqgp
via IFTTT

MyFitnessPal Data Breach Hits 150M Users


Under Armour’s MyFitnessPal program is picking up the pieces after a recent data breach affected 150 million users. In late February, an “unauthorized party” reportedly hacked into the health-tracking service, stealing account usernames, […]

The post MyFitnessPal Data Breach Hits 150M Users appeared first on Geek.com.



from Geek.com https://ift.tt/2pV9UWK
via IFTTT

Dangerous Books and Unfortunate Events: Let Geek Tell You What to Watch This Weekend


Forget Peak TV. We’re living in an age of Peak Content, period. There are so many cool shows and movies and games and weird internet videos you could consume at any given moment […]

The post Dangerous Books and Unfortunate Events: Let Geek Tell You What to Watch This Weekend appeared first on Geek.com.



from Geek.com https://ift.tt/2Fr86vS
via IFTTT

Gotham Lets All the Villains Out of Their Cages


With an episode like this, you can see why Gotham was in such a hurry to bring all its ongoing stories to a good stopping place last week. As it gears up for what looks […]

The post Gotham Lets All the Villains Out of Their Cages appeared first on Geek.com.



from Geek.com https://ift.tt/2J6Krm0
via IFTTT

The real threat to Facebook is the kool-aid turning sour

These kinds of leaks didn’t happen when I started reporting on Facebook eight years ago. It was a tight-knit cult convinced of its mission to connect everyone, but with the discipline of a military unit where everyone knew loose lips sink ships. Motivational posters with bold corporate slogans dotted its offices, rallying the troops. Employees were happy to be evangelists.

But then came the fake news, News Feed addiction, violence on Facebook Live, cyberbullying, abusive ad targeting, election interference, and most recently the Cambridge Analytica app data privacy scandals. All the while, Facebook either willfully believed the worst case scenarios could never come true, was naive to their existence, or calculated the benefits and growth outweighed the risks. And when finally confronted, Facebook often dragged its feet before admitting the extent of the problems.

Inside the social network’s offices, the bonds began to fray. Slogans took on sinister second meanings. The kool-aid tasted different.

Some hoped they could right the ship but couldn’t. Some craved the influence and intellectual thrill of running one of humanity’s most popular inventions, but now question if that influence and their work is positive. Others surely just wanted to collect salaries, stock, and resume highlights but lost the stomach for it.

Now the convergence of scandals has come to a head in the form of constant leaks.

The Trouble Tipping Point

The more benign leaks merely cost Facebook a bit of competitive advantage. We’ve learned it’s building a smart speaker, a standalone VR headset, and a Houseparty split-screen video chat clone.

Yet policy-focused leaks have exacerbated the backlash against Facebook, putting more pressure on the conscience of employees. As blame fell to Facebook for Trump’s election, word of Facebook prototyping a censorship tool for operating in China escaped, triggering questions about its respect for human rights and free speech. Facebook’s content rulebook got out alongside disturbing tales of the filth the company’s contracted moderators have to sift through. Its ad targeting was revealed to be able to pinpoint emotionally vulnerable teens.

In recent weeks, the leaks have accelerated to a maddening pace in the wake of Facebook’s soggy apologies regarding the Cambridge Analytica debacle. Its weak policy enforcement left the door open to exploitation of data users gave third-party apps, deepening the perception that Facebook doesn’t care about privacy.

And it all culminated with BuzzFeed publishing a leaked “growth at all costs” internal post from Facebook VP Andrew “Boz” Bosworth that substantiated people’s worst fears about the company’s disregard for user safety in pursuit of world domination. Even the ensuing internal discussion about the damage caused by leaks and how to prevent them…leaked.

But the leaks are not the disease, just the symptom. Sunken morale is the cause, and it’s dragging down the company. Former Facebook employee and Wired writer Antonio Garcia Martinez sums it up, saying this kind of vindictive, intentionally destructive leak fills Facebook’s leadership with “horror”.

And that sentiment was confirmed by Facebook’s VP of News Feed Adam Mosseri, who tweeted that leaks “create strong incentives to be less transparent internally and they certainly slow us down”, and will make it tougher to deal with the big problems.

Those thoughts weigh heavy on Facebook’s team. A source close to several Facebook executives tells us they feel “embarrassed to work there” and are increasingly open to other job opportunities. One current employee told us to assume anything certain execs tell the media is “100% false”.

If Facebook can’t internally discuss the problems it faces without being exposed, how can it solve them?

Implosion

The consequences of Facebook’s failures are typically pegged as external hazards.

You might assume the government will finally step in and regulate Facebook. But the Honest Ads Act and other rules about ads transparency and data privacy could end up protecting Facebook by being simply a paperwork speed bump for it while making it tough for competitors to build a rival database of personal info. In our corporation-loving society, it seems unlikely that the administration would go so far as to split up Facebook, Instagram, and WhatsApp — one of the few feasible ways to limit the company’s power.

Users have watched Facebook make misstep after misstep over the years, but can’t help but stay glued to its feed. Even those who don’t scroll rely on it as a fundamental utility for messaging and login on other sites. Privacy and transparency are too abstract for most people to care about. Hence, first-time Facebook downloads held steady and its App Store rank actually rose in the week after the Cambridge Analytica fiasco broke. Regarding the #DeleteFacebook movement, Mark Zuckerberg himself said “I don’t think we’ve seen a meaningful number of people act on that.” And as long as they’re browsing, advertisers will keep paying Facebook to reach them.

That’s why the greatest threat of the scandal convergence comes from inside. The leaks are the canary in the noxious blue coal mine.

Can Facebook Survive Slowing Down?

If employees wake up each day unsure whether Facebook’s mission is actually harming the world, they won’t stay. Facebook doesn’t have the same internal work culture problems as some giants like Uber. But there are plenty of other tech companies with less questionable impacts. Some are still private and offer the chance to win big on an IPO or acquisition. At the very least, those in the Bay could find somewhere to work without spending hours a day on the traffic-snarled 101 freeway.

If they do stay, they won’t work as hard. It’s tough to build if you think you’re building a weapon. Especially if you thought you were going to be making helpful tools. The melancholy and malaise set in. People go into rest-and-vest mode, living out their days at Facebook as a sentence not an opportunity. The next killer product Facebook needs a year or two from now might never coalesce.

And if they do work hard, a culture of anxiety and paralysis will work against them. No one wants to code with their hands tied, and some would prefer a less scrutinized environment. Every decision will require endless philosophizing and risk-reduction. Product changes will be reduced to the lowest common denominator, designed not to offend or appear too tyrannical.


In fact, that’s partly how Facebook got into this whole mess. A leak by an anonymous former contractor led Gizmodo to report Facebook was suppressing conservative news in its Trending section. Terrified of appearing liberally biased, Facebook reportedly hesitated to take decisive action against fake news. That hands-off approach led to the post-election criticism that degraded morale and pushed the growing snowball of leaks down the mountain.

It’s still rolling.

How to stop morale’s downward momentum will be one of Facebook’s greatest tests of leadership. This isn’t a bug to be squashed. It can’t just roll back a feature update. And an apology won’t suffice. It will have to expel or reeducate the leakers and the disloyal without instilling a witch hunt’s sense of dread. Compensation may have to jump upwards to keep talent aboard, as Twitter did when it was floundering. Its top brass will need to show candor and accountability without fueling more indiscretion. And it may need to make a shocking, landmark act of humility to convince employees it’s capable of change.

This isn’t about whether Facebook will disappear tomorrow, but whether it will remain unconquerable for the foreseeable future.

Growth has been the driving mantra for Facebook since its inception. No matter how employees are evaluated, it’s still the underlying ethos. Facebook has positioned itself as a mission-driven company. The implication was always that connecting people is good, so connecting more people is better. The only question was how to grow faster.

Now Zuckerberg will have to figure out how to get Facebook to cautiously foresee the consequences of what it says and does while remaining an appealing place to work. “Move slow and think things through” just doesn’t have the same ring to it.



from Social – TechCrunch https://ift.tt/2J9fTjg
via IFTTT

Scientists Discovered a New Human Organ: The Interstitium


You’d think with our massive research budgets and the fact that we tend to know and think a lot about ourselves, we’d know just about everything about ourselves. Now, I get that the […]

The post Scientists Discovered a New Human Organ: The Interstitium appeared first on Geek.com.



from Geek.com https://ift.tt/2E80M63
via IFTTT

Facebook’s mission changed, but its motives didn’t

In January, Facebook announced that it would be changing its feed algorithm to promote users’ well-being over time spent browsing content. That’s a relatively new approach for a company whose ethos once centered around “move fast, break things.”

It wasn’t all that long ago (approximately a year and a half before the algorithm change) that Facebook VP Andrew “Boz” Bosworth published an internal memo called “The Ugly,” which was circulated throughout the company. In it, Boz made it clear to employees that connecting people (i.e. growth) is the main focus at Facebook, at all costs.

Buzzfeed first published the memo, which said:

Maybe it costs a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated on our tools.

And still we connect people.

The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is *de facto* good. It is perhaps the only area where the metrics do tell the true story as far as we are concerned.

He goes on:

That isn’t something we are doing for ourselves. Or for our stock price (ha!). It is literally just what we do. We connect people. Period.

That’s why all the work we do in growth is justified. All the questionable contact importing practices. All the subtle language that helps people stay searchable by friends. All of the work we do to bring more communication in. The work we will likely have to do in China some day. All of it.

Facebook launched in 2004 and ushered in a honeymoon period for users. We reveled in uploading photos from our digital cameras and sharing them with friends. We cared about each and every notification. We shared our status. We played Farmville. We diligently curated our Likes.

But the honeymoon is over. Facebook grew to 1 billion active users in 2012. The social network now has over 2 billion active users. A growing number of people get their news from social media. The size and scope of Facebook is simply overwhelming.

And we’ve been well aware, as users and outsiders looking in on the network, that just like any other tool, Facebook can be used for evil.

But there was still some question whether or not Facebook leadership understood that principle, and if they did, whether or not they actually cared.

For a long time, perhaps too long, Facebook adhered to the “Move fast, break things” mentality. And things have certainly been broken, from fake news circulated during the 2016 Presidential election to the improper use of user data by third-party developers and Cambridge Analytica. And that’s likely the tip of the iceberg.

The memo was written long before the shit hit the fan for Facebook. It was published following the broadcast of Antonio Perkins’ murder on Facebook. This was back when Facebook was still insisting that it wasn’t a media company, that it was simply a set of pipes through which people can ship off their content.

What is so shocking about the memo is that it confirms some of our deepest fears. A social network, with a population greater than any single country, is solely focused on growth over the well-being of the society it has built. That the ends, to be a product everyone uses, might justify the means.

Facebook has tried to move away from this persona, however gently. In late 2016, Zuckerberg finally budged on the idea that Facebook is a media company, clarifying that it’s not a traditional media company. Last year, the company launched the Journalism Project in response to the scary growth of fake news on the platform. Zuckerberg even posted full-page print ads seeking patience and forgiveness in the wake of this most recent Cambridge Analytica scandal.

While that all seems like more of a public relations response than actionable change, it’s better than the stoic, inflexible silence of before.

After Buzzfeed published the memo, Boz and Zuckerberg both responded.

Boz said it was all about spurring internal debate to help shape future tools.

Zuck had this to say:

Boz is a talented leader who says many provocative things. This was one that most people at Facebook including myself disagreed with strongly. We’ve never believed the ends justify the means.

We recognize that connecting people isn’t enough by itself. We also need to work to bring people closer together. We changed our whole mission and company focus to reflect this last year.

If Boz wrote this memo to spark debate, it’s hard to discern whether that debate led to real change.

The memo has since been deleted, but you can read the full text below:

The Ugly

We talk about the good and the bad of our work often. I want to talk about the ugly.

We connect people.

That can be good if they make it positive. Maybe someone finds love. Maybe it even saves the life of someone on the brink of suicide.

So we connect more people

That can be bad if they make it negative. Maybe it costs a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated on our tools.

And still we connect people.

The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is *de facto* good. It is perhaps the only area where the metrics do tell the true story as far as we are concerned.

That isn’t something we are doing for ourselves. Or for our stock price (ha!). It is literally just what we do. We connect people. Period.

That’s why all the work we do in growth is justified. All the questionable contact importing practices. All the subtle language that helps people stay searchable by friends. All of the work we do to bring more communication in. The work we will likely have to do in China some day. All of it.

The natural state of the world is not connected. It is not unified. It is fragmented by borders, languages, and increasingly by different products. The best products don’t win. The ones everyone use win.

I know a lot of people don’t want to hear this. Most of us have the luxury of working in the warm glow of building products consumers love. But make no mistake, growth tactics are how we got here. If you joined the company because it is doing great work, that’s why we get to do that great work. We do have great products but we still wouldn’t be half our size without pushing the envelope on growth. Nothing makes Facebook as valuable as having your friends on it, and no product decisions have gotten as many friends on as the ones made in growth. Not photo tagging. Not news feed. Not messenger. Nothing.

In almost all of our work, we have to answer hard questions about what we believe. We have to justify the metrics and make sure they aren’t losing out on a bigger picture. But connecting people. That’s our imperative. Because that’s what we do. We connect people.



from Social – TechCrunch https://ift.tt/2E7sX4Y
via IFTTT

NASA Preps Exoplanet Hunter for April Mission


NASA is preparing TESS, the planet hunter, for launch next month. On April 16, the Transiting Exoplanet Survey Satellite (TESS) will begin its search for undiscovered worlds orbiting nearby stars. Dispatched aboard a […]

The post NASA Preps Exoplanet Hunter for April Mission appeared first on Geek.com.



from Geek.com https://ift.tt/2E87Wam
via IFTTT

MovieBob Reviews: UNSANE (2018)


Is Unsane good? Yes. What’s it about? Directed by Steven Soderbergh, the film stars Claire Foy as a young career woman making a fresh start in a small town after a frightening experience […]

The post MovieBob Reviews: UNSANE (2018) appeared first on Geek.com.



from Geek.com https://ift.tt/2Ih7SYm
via IFTTT

Elon Musk’s Neuralink Wanted an Animal Testing Lab in San Francisco


Zany billionaire Elon Musk has been obsessed with futurism for a while now. And not content with space, cars, public transportation, and the power grid, last year he set his sights on brain-computer […]

The post Elon Musk’s Neuralink Wanted an Animal Testing Lab in San Francisco appeared first on Geek.com.



from Geek.com https://ift.tt/2IiJPZ1
via IFTTT

One Punch Man Season 2 Sees A Character Return and A New Event

One Punch Man Volume 1

One Punch Man made its debut a while back, nearly two years ago now. It’s been a long time since we’ve gotten to see the One Punch Man himself, Saitama, in action […]

The post One Punch Man Season 2 Sees A Character Return and A New Event appeared first on Geek.com.



from Geek.com https://ift.tt/2IhtOm6
via IFTTT

Facebook tries to prove it cares with “Fighting Abuse @ Scale” conference

Desperate to show it takes thwarting misinformation, fraud, and spam seriously, Facebook just announced a last-minute “Fighting Abuse @Scale” conference in San Francisco on April 25th. Speakers from Facebook, Airbnb, Google, Microsoft, and LinkedIn will discuss how to stop fake news, prevent counterfeit account creation, use honeypots to disrupt adversarial infrastructure, and employ machine learning to boost platform safety.

Fighting Abuse @Scale will be held at the Bespoke Event Center within the Westfield Mall in SF. We can expect more technical details about the new proactive artificial intelligence tools Facebook announced today during a conference call about its plans to protect election integrity. The first session is titled “Combating misinformation at Facebook” and will feature an engineering director and data scientists from the company.

Facebook previously held “Fighting Spam @Scale” conferences in May 2015 and November 2016 just after the Presidential election. But since then, public frustration has built up to a breaking point for the social network. Russian election interference, hoaxes reaching voters, violence on Facebook Live, the ever-present issue of cyberbullying, and now the Cambridge Analytica data privacy scandal have created a convergence of backlash. With its share price plummeting, former executives speaking out against its impact on society, and CEO Mark Zuckerberg on a media apology tour, Facebook needs to show this isn’t just a PR problem. It needs users, potential government regulators, and its own existing and potential employees to see it’s willing to step up and take responsibility for fixing its platform.



from Social – TechCrunch https://tcrn.ch/2Gkhrcu
via IFTTT

No Man’s Sky Comes to Xbox, Gets Huge Update This Summer

No Man's Sky NEXT

No Man’s Sky didn’t exactly go over well with gamers and critics when it launched in the summer of 2016. This is due to the game’s lack of content and Hello Games’ misleading […]

The post No Man’s Sky Comes to Xbox, Gets Huge Update This Summer appeared first on Geek.com.



from Geek.com http://bit.ly/2pP1G2r
via IFTTT

Geek Deals Roundup: $100 Bonus Gift Card with the Xbox One X, $150 Dyson V6 Cordless Handheld, and more

Geek 0329

Ready to make the jump to a 4K gaming experience? Right now, you can get a $100 eGift card and a copy of PUBG when you buy an Xbox One X from Dell. […]

The post Geek Deals Roundup: $100 Bonus Gift Card with the Xbox One X, $150 Dyson V6 Cordless Handheld, and more appeared first on Geek.com.



from Geek.com http://bit.ly/2J4FtpD
via IFTTT

Riverdale Rescues Cheryl and It’s Mob Story


Maybe I’ve just accepted that Archie is perpetually a frustrating character on Riverdale. Maybe that makes his constant bad decisions and dopey lines easier to tolerate. It’s also possible the show has realized it’s […]

The post Riverdale Rescues Cheryl and It’s Mob Story appeared first on Geek.com.



from Geek.com http://bit.ly/2GV20Vb
via IFTTT

Marvel Comics is Finally Bringing Back the Fantastic Four

Fantastic Four Dan Slott

Call it a comeback! Reed Richards, Susan Storm, Johnny Storm, and Benjamin J. Grimm (along with the Richards children) will be reunited this August in a new ongoing Fantastic Four series. Marvel Comics […]

The post Marvel Comics is Finally Bringing Back the Fantastic Four appeared first on Geek.com.



from Geek.com https://ift.tt/2pLMsMs
via IFTTT

China Uses Facial Recognition To Fine And Publicly Shame Jaywalkers


Not everyone thinks jaywalking is a big deal. In a bustling city of over 23 million people, however, it can cause serious problems… and Chinese authorities are cracking down in a high-tech way. […]

The post China Uses Facial Recognition To Fine And Publicly Shame Jaywalkers appeared first on Geek.com.



from Geek.com https://ift.tt/2GpvkCw
via IFTTT

The Hosts Have a New Complaint in the New Westworld Season 2 Trailer


We had to wait more than a year and a half in between seasons, but so far, it’s looking worth it. The second trailer for the next installment in HBO’s sci-fi series dropped […]

The post The Hosts Have a New Complaint in the New Westworld Season 2 Trailer appeared first on Geek.com.



from Geek.com https://ift.tt/2GE9TRd
via IFTTT

Instagram reenables GIF sharing after GIPHY promises no more racism

A racial slur GIF slipped into GIPHY’s sticker library earlier this month, prompting Instagram and Snapchat to drop their GIPHY integrations. Now Instagram is reactivating the integration after GIPHY confirmed it has reviewed its GIF library four times and will preemptively review any new GIFs it adds. Snapchat said it had nothing to share right now about whether it’s going to reactivate GIPHY.

“We’ve been in close contact with GIPHY throughout this process and we’re confident that they have put measures in place to ensure that Instagram users have a good experience,” an Instagram spokesperson told TechCrunch. GIPHY told TechCrunch in a statement that “To anyone who was affected: we’re sorry. We take full responsibility for this recent event and under no circumstances does GIPHY condone or support this kind of content . . . We have also finished a full investigation into our content moderation systems and processes and have made specific changes to our process to ensure something like this does not happen again.”

We first reported Instagram was building a GIPHY integration back in January before it launched a week later, with Snapchat adding a similar feature in February. But it wasn’t long before things went wrong. First spotted by a user in the U.K. around March 8th, the GIF included a racial slur. We’ve shared a censored version of the image below, but warning, it still includes graphic content that may be offensive to some users.

When asked, Snapchat told TechCrunch “We have removed GIPHY from our application until we can be assured that this will never happen again.” Instagram wasn’t aware that the racist GIF was available in its GIPHY integration until informed by TechCrunch, leading to a shutdown of the feature within an hour. An Instagram spokesperson told TechCrunch “This type of content has no place on Instagram.” After 12 hours of silence, GIPHY responded the next morning, telling us “After investigation of the incident, this sticker was available due to a bug in our content moderation filters specifically affecting GIF stickers.”

The fiasco highlights the risks of major platforms working with third-party developers to bring outside and crowdsourced content into their apps. Snapchat historically resisted working with established developers, but recently it has struck more partnerships, particularly around augmented reality lenses and marketing service providers. While it’s an easy way to provide more entertainment and creative expression tools, developer integrations also force companies to rely on the quality and safety of things they don’t fully control. As Instagram and Snapchat race for users around the world, they’ll have to weigh the risks and rewards of letting developers into their gardens.

GIPHY’s full statement is below.

CHANGES TO GIPHY’S STICKER MODERATION

Before we get into the details, we wanted to take a moment and sincerely apologize for the deeply offensive sticker discovered by a user on March 8, 2018. To anyone who was affected: we’re sorry. We take full responsibility for this recent event and under no circumstances does GIPHY condone or support this kind of content.

The content was immediately removed and after investigation a bug was found in our content moderation filters affecting stickers. This bug was immediately fixed and all stickers were re-moderated.

We have also finished a full investigation into our content moderation systems and processes and have made specific changes to our process to ensure something like this does not happen again.

THE CHANGES

After fixing the bug in our content moderation filters and confirming that the sticker was successfully detected, we re-moderated our entire sticker library 4x.

We have also added another level of GIPHY moderation before each sticker is approved into the library. This is now a permanent addition to our moderation process.

We hope this will ensure that GIPHY stickers will always be fun and safe no matter where you see them.

THE FUTURE AND BEYOND

GIFs and Stickers are supposed to make the Internet a better, more entertaining place. GIPHY is committed to making sure that’s always the case. As GIPHY continues to grow, we’re going to continue looking for ways to improve our user experience. Please let us know how we can help at: support@giphy.com.

Team Giphy.
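As a rough illustration of the kind of pre-approval gate GIPHY describes (this is not GIPHY’s or Instagram’s actual code; the statuses, filter, and blocklist are invented), a new third-party sticker would have to clear an automated filter and then sit in a human review queue before any integrating app could serve it.

```python
# Illustrative only -- not GIPHY's or Instagram's actual code. New stickers must
# pass an automated filter and then a human review queue before integrating
# apps can serve them; anything the filter rejects never reaches a moderator.
from enum import Enum

class Status(Enum):
    PENDING = "pending"      # queued for a human moderator
    APPROVED = "approved"    # servable to partner apps
    REJECTED = "rejected"

BLOCKLIST = {"offensive_term"}  # placeholder for real classifiers and keyword lists

def automated_filter(tags: set) -> bool:
    """Return True if the sticker passes the automated content filter."""
    return not (tags & BLOCKLIST)

def moderate(sticker: dict) -> Status:
    if not automated_filter(set(sticker["tags"])):
        return Status.REJECTED
    # The second, permanent layer: a person reviews every sticker before it
    # enters the library, so nothing is approved automatically.
    return Status.PENDING

print(moderate({"id": 1, "tags": ["cat", "dance"]}))            # Status.PENDING
print(moderate({"id": 2, "tags": ["offensive_term", "text"]}))  # Status.REJECTED
```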

 



from Social – TechCrunch https://ift.tt/2E5Cv0o
via IFTTT

Facebook starts fact checking photos/videos, blocks millions of fake accounts per day

Facebook has begun letting partners fact check photos and videos beyond news articles, and proactively review stories before Facebook asks them to. Facebook is also now preemptively blocking the creation of millions of fake accounts per day. Facebook revealed this news on a conference call with journalists [Update: and later a blog post] about its efforts around election integrity that included Chief Security Officer Alex Stamos, who’s reportedly leaving Facebook later this year but claims he’s still committed to the company.

Articles flagged as false by Facebook’s fact checking partners have their reach reduced and display Related Articles below, showing perspectives from reputable news outlets.

Stamos outlined how Facebook is building ways to address fake identities, fake audiences grown illicitly or pumped up to make content appear more popular, acts of spreading false information, and false narratives that are intentionally deceptive and shape people’s views beyond the facts. “We’re trying to develop a systematic and comprehensive approach to tackle these challenges, and then to map that approach to the needs of each country or election” says Stamos.

Samidh Chakrabarti, Facebook’s product manager for civic engagement, also explained that Facebook is now proactively looking for foreign-based Pages producing civic-related content inauthentically. It removes them from the platform if a manual review by the security team finds they violate terms of service.

“This proactive approach has allowed us to move more quickly and has become a really important way for us to prevent divisive or misleading memes from going viral,” said Chakrabarti. Facebook first piloted this tool in the Alabama special election, but has now deployed it to protect Italian elections and will use it for the U.S. mid-term elections.

Meanwhile, advances in machine learning have allowed Facebook “to find more suspicious behaviors without assessing the content itself” to block millions of fake account creations per day “before they can do any harm”, says Chakrabarti.
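Facebook hasn’t published how these models work, but the general approach of scoring behavioral signals rather than content can be sketched. Everything below is invented for illustration: the features, training data, and threshold are placeholders, not anything Facebook has described.

```python
# Facebook hasn't published its models; this is a generic illustration of the
# idea: score a signup from behavioral signals (signup speed, IP reuse, early
# friend-request bursts) rather than from posted content. All numbers invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [signups_from_same_IP_last_hour, seconds_to_complete_signup,
#            profile_fields_filled, friend_requests_in_first_10_min]
X_train = np.array([
    [1, 180, 6, 2],    # typical real signups
    [2, 240, 5, 1],
    [40, 12, 1, 30],   # bot-like bursts
    [55, 9, 0, 45],
])
y_train = np.array([0, 0, 1, 1])  # 1 = fake

clf = LogisticRegression().fit(X_train, y_train)

new_signup = np.array([[48, 10, 1, 38]])
fake_probability = clf.predict_proba(new_signup)[0, 1]
if fake_probability > 0.5:
    print("block at creation time")  # "before they can do any harm"
else:
    print("allow, keep monitoring")
```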

Facebook implemented its first slew of election protections back in December 2016, including working with third-party fact checkers to flag articles as false. But those red flags were shown to entrench some people’s belief in false stories, leading Facebook to shift to showing Related Articles with perspectives from other reputable news outlets. As of yesterday, Facebook’s fact checking partners began reviewing suspicious photos and videos which can also spread false information. This could reduce the spread of false news image memes that live on Facebook and require no extra clicks to view, like doctored photos showing the Parkland school shooting survivor Emma González ripping up the constitution.

Normally, Facebook sends fact checkers stories that are being flagged by users and going viral. But now in countries like Italy and Mexico in anticipation of elections, Facebook has enabled fact checkers to proactively flag things because in some cases they can identify false stories that are spreading before Facebook’s own systems. “To reduce latency in advance of elections, we wanted to ensure we gave fact checkers that ability” says Facebook’s News Feed product manager Tessa Lyons.
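As a sketch of the flow described above (not Facebook’s implementation; the field names and demotion factor are made up), a “false” verdict from a fact-checking partner demotes a post’s distribution and attaches Related Articles rather than removing the post outright.

```python
# Not Facebook's implementation -- just a sketch of the flow described in this
# article: a "false" verdict from a fact-checking partner demotes the post's
# distribution and attaches Related Articles instead of deleting it.
def apply_fact_check(post: dict, verdict: str, related_articles: list) -> dict:
    if verdict == "false":
        post["rank_multiplier"] = 0.2                 # reach reduced, not removed
        post["related_articles"] = related_articles   # context from other outlets
    return post

meme = {"id": "photo-123", "rank_multiplier": 1.0}
print(apply_fact_check(meme, "false", ["https://outlet.example/coverage"]))
```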

A photo of Parkland shooting survivor Emma González ripping up a shooting range target was falsely doctored to show her ripping up the constitution. Photo fact checking could help Facebook prevent the false image from going viral. [Image via CNN]

With the mid-terms coming up quick, Facebook has to both secure its systems against election interference, as well as convince users and regulators that it’s made real progress since the 2016 presidential election where Russian meddlers ran rampant. Otherwise, Facebook risks another endless news cycle about it being a detriment to democracy that could trigger reduced user engagement and government intervention.

from Social – TechCrunch https://ift.tt/2usDecF
via IFTTT

Throwing Salt Into the Atmosphere Might Help With Climate Change


A group of scientists has suggested that dumping tremendous amounts of salt into the upper atmosphere could help lessen the effects of climate change. Quite similar to the sulfur dioxide plan we wrote […]

The post Throwing Salt Into the Atmosphere Might Help With Climate Change appeared first on Geek.com.



from Geek.com https://ift.tt/2pUpgLh
via IFTTT

Twitter makes it easier to share the right part of a live video with launch of ‘Timestamps’

Twitter today is introducing a new feature that will make it easier to share a key moment from a live video, so those viewing the tweet don’t have to scroll to the part of the broadcast you want to talk about. The feature, called “Timestamps,” is something Twitter says it built in response to existing user behavior on Twitter.

Before, users could only tweet an entire live video. So, if they wanted to highlight a particular segment, they would tweet the video along with the specific time in the video where the part they’re trying to share begins.

Those viewing the tweet would then have to scroll through the video to the correct time, which can be cumbersome on longer broadcasts and challenging on slower connections.


The new Timestamps feature makes this whole process simpler. Now, when you tap to share a live video (or a replay of a live video), you’re able to scroll back to the exact time you want the audience to watch. You can then add your own thoughts to the tweet, and post it as usual.

But anyone seeing the tweet will start watching right at the time you specified.

If the video is still live, they’ll then be able to skip to what’s happening now by clicking the “live” button, or they can scroll back and forward in the video as they choose.
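Twitter hasn’t documented the underlying format, but the mechanic is easy to picture. In the hypothetical sketch below, the shared tweet carries a broadcast ID plus a start offset, and the player seeks to that offset unless the viewer jumps to the live edge.

```python
# Twitter hasn't published the underlying format; this hypothetical sketch just
# models the mechanic: the shared tweet carries a broadcast ID and a start
# offset, and the player seeks there unless the viewer jumps to the live edge.
from dataclasses import dataclass

@dataclass
class TimestampedShare:
    broadcast_id: str
    start_offset_s: int   # where playback begins for viewers of the tweet
    comment: str = ""

def start_position(share: TimestampedShare, is_live: bool, live_edge_s: int,
                   jump_to_live: bool = False) -> int:
    """Return the second at which playback should start."""
    if is_live and jump_to_live:
        return live_edge_s           # the "live" button behavior
    return share.start_offset_s      # otherwise start where the sharer pointed

goal = TimestampedShare("broadcast/123", start_offset_s=3541, comment="What a goal!")
print(start_position(goal, is_live=True, live_edge_s=4200))                     # 3541
print(start_position(goal, is_live=True, live_edge_s=4200, jump_to_live=True))  # 4200
```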

The new option ties in well with Twitter’s live streaming efforts, which has seen the company focused on offering live-streamed sporting events, news broadcasts, and other events.

For example, those live-streaming a sports match could re-share the same live video broadcast every time the team scores a goal, with the video already positioned to the right part of the broadcast to capture that action. That could increase the video’s number of viewers, which could then translate to better advertising potential for those live streams.

However, Twitter will not allow advertisers to place their ads against the Timestamped moments at launch, because they don’t want to get into a situation where an advertiser is positioned up against a moment that’s not considered ‘brand-safe.’

Beyond the sports-focused use cases, people could also take advantage of Timestamps to share their favorite song from a live-streamed concert, while reporters could highlight something important said during a press conference.

Twitter notes the Timestamps feature will be available to anyone – not just professional content publishers. It will also work for anyone doing a broadcast from their phone, and will support live videos both on Twitter and Periscope.

On Twitter, you’ll be able to share the live video as a tweet, while on Periscope you’re able to share to your Periscope followers, in addition to sharing to Twitter or sharing as a link.

Timestamps isn’t the first feature Twitter built by watching how people were using its product. The company has a long history of adapting its product to consumer behavior as it did with the previous launches of @replies, the hashtag, retweets and, most recently, threads.

The update that delivers support for Timestamps is rolling out today on Twitter for Android and iOS, Twitter.com and Periscope.

 



from Social – TechCrunch https://ift.tt/2GGPACP
via IFTTT

Here’s Cambridge Analytica’s plan for voters’ Facebook data

More details have emerged about how Facebook data on millions of US voters was handled after it was obtained in 2014 by UK political consultancy Cambridge Analytica for building psychographic profiles of Americans to target election messages for the Trump campaign.

The dataset — of more than 50M Facebook users — is at the center of a scandal that’s been engulfing the social network giant since newspaper revelations published on March 17 dropped privacy and data protection into the top of the news agenda.

A UK parliamentary committee has published a cache of documents provided to it by an ex-CA employee, Chris Wylie, who gave public testimony in front of the committee at an oral hearing earlier this week. During that hearing he said he believes data on “substantially” more than 50M Facebookers was obtained by CA. Facebook has not commented publicly on that claim.

Among the documents the committee has published today (with some redactions) is the data-licensing contract between Global Science Research (GSR) — the company set up by the Cambridge University professor, Aleksandr Kogan, whose personality test app was used by CA as the vehicle for gathering Facebook users’ data — and SCL Elections (an affiliate of CA), dated June 4, 2014.

The document is signed by Kogan and CA’s now suspended CEO, Alexander Nix.

The contract stipulates that all monies transferred to GSR will be used for obtaining and processing the data for the project — “to further develop, add to, refine and supplement GS psychometric scoring algorithms, databases and scores” — and none of the money paid to Kogan should be spent on other business purposes, such as salaries or office space, “unless otherwise approved by SCL”.

Wylie told the committee on Tuesday that CA chose to work with Kogan as he had agreed to work with them on acquiring and modeling the data first, without fixing commercial terms up front.

The contract also stipulates that Kogan’s company must gain “advanced written approval” from SCL to cover costs not associated with collecting the data — including “IT security”.

Which does rather underline CA’s priorities in this project: Obtain, as fast as possible, lots of personal data on US voters, but don’t worry much about keeping that personal information safe. Security is a backburner consideration in this contract.

CA responded to Wylie’s testimony on Tuesday with a statement rejecting his allegations — including claiming it “does not hold any GSR data or any data derived from GSR data”.

The company has not updated its press page with any new statement in light of the publication of a 2014 contract signed by its former CEO and GSR’s Kogan.

Earlier this week the committee confirmed that Nix has accepted its summons to return to give further evidence — saying the public session will likely take place on April 17.

Voter modeling across 11 US States

The first section of the contract between the CA affiliate company and GSR briefly describes the purpose of the project as being to conduct “political modeling” of the population in 11 US states.

On the data protection front, the contract includes a clause stating that both parties “warrant and undertake” to comply with all relevant privacy and data handling laws.

“Each of the parties warrants and undertakes that it will not knowingly do anything or permit anything to be done which might lead to a breach of any such legislation, regulations and/or directives by the other party,” it also states.

CA remains under investigation by the UK’s data protection watchdog, which obtained a warrant to enter its offices last week — and spent several hours gathering evidence. The company’s activities are being looked at as part of a wider investigation by the ICO into the use of data analytics for political purposes.

Commissioner Elizabeth Denham has previously said she’s leaning towards recommending a code of conduct for use of social media for political campaigning — and said she hopes to publish her report by May.

Another clause in the contract between GSR and SCL specifies that Kogan’s company will “seek out informed consent of the seed user engaging with GS Technology” — which would presumably refer to the ~270,000 people who agreed to take the personality quiz in the app deployed via Facebook’s platform.

Upon completion of the project, the contract specifies that Kogan’s company may continue to make use of SCL data for “academic research where no financial gain is made”.

Another clause details an additional research boon that would be triggered if Kogan was able to meet performance targets and deliver SCL with 2.1M matched records in the 11 US states it was targeting — so long as he met its minimum quality standards and at an averaged cost of $0.50 or less per matched record. In that event, he stood to also receive an SCL dataset of around 1M residents of Trinidad and Tobago — also “for use in academic research”.

The second section of the contract explains the project and its specification in detail.

Here it states that the aim of the project is “to infer psychological profiles”, using self-reported personality test data, political party preference and “moral value data”.

The 11 US states targeted by the project are also named as: Arkansas, Colorado, Florida, Iowa, Louisiana, Nevada, New Hampshire, North Carolina, Oregon, South Carolina and West Virginia.

The project is detailed in the contract as a seven-step process:

  1. Kogan’s company, GSR, generates an initial seed sample (though the contract does not specify how large this is here) using “online panels”.

  2. GSR analyzes this seed training data using its own “psychometric inventories” to try to determine personality categories.

  3. Kogan’s personality quiz app is deployed on Facebook to gather the full dataset from respondents and also to scrape a subset of data from their Facebook friends (here the contract notes: “upon consent of the respondent, the GS Technology scrapes and retains the respondent’s Facebook profile and a quantity of data on that respondent’s Facebook friends”).

  4. The psychometric data from the seed sample, plus the Facebook profile data and friend data, are all run through proprietary modeling algorithms — which the contract specifies are based on using Facebook likes to predict personality scores, with the stated aim of predicting the “psychological, dispositional and/or attitudinal facets of each Facebook record”.

  5. This then generates a series of scores per Facebook profile.

  6. These psychometrically scored profiles are matched with voter record data held by SCL — with the goal of matching (and thus scoring) at least 2M voter records for targeting voters across the 11 states.

  7. Matched records are returned to SCL, which would then be in a position to craft messages to voters based on their modeled psychometric scores.
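To make steps 4 through 6 concrete, here is a rough schematic in Python. It is not Kogan’s code, and the data, names, and model choice are invented; it only illustrates the shape of the pipeline: fit a likes-to-traits model on the scored seed sample, score every collected profile, then join those scores onto a voter file.

```python
# A schematic of steps 4-6 only, not Kogan's actual code: fit a likes-to-trait
# model on the psychometrically scored seed sample, score every collected
# profile, then join the scores onto a voter file. All data here is invented.
import numpy as np
from sklearn.linear_model import Ridge

seed_likes = np.array([[1, 0, 1, 0],       # seed users' like vectors (4 pages)
                       [0, 1, 0, 1],
                       [1, 1, 0, 0]])
seed_openness = np.array([0.8, 0.2, 0.6])  # trait scores from the seed survey

model = Ridge(alpha=1.0).fit(seed_likes, seed_openness)     # step 4

profiles = {"jane_doe_CO": [1, 0, 0, 1],                    # output of step 3
            "john_roe_FL": [0, 1, 1, 0]}
scores = {name: float(model.predict(np.array([likes]))[0])  # step 5
          for name, likes in profiles.items()}

voter_file = {"jane_doe_CO": {"state": "Colorado"},         # SCL's voter records
              "john_roe_FL": {"state": "Florida"}}
matched = {name: {**voter_file[name], "openness": round(score, 2)}  # step 6
           for name, score in scores.items() if name in voter_file}
print(matched)
```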

The “ultimate aim” of the psychometric profiling product Kogan built off of the training and Facebook data sets is imagined as “a ‘gold standard’ of understanding personality from Facebook profile information, much like charting a course to sail”.

The possibility for errors is noted briefly in the document but it adds: “Sampling in this phase [phase 1 training set] will be repeated until assumptions and distributions are met.”

In a later section, on demographic distribution analysis, the contract mentions the possibility for additional “targeted data collection procedures through multiple platforms” to be used — even including “brief phone scripts with single-trait questions” — in order to correct any skews that might be found once the Facebook data is matched with voter databases in each state, (and assuming any “data gaps” could not be “filled in from targeted online samples”, as it also puts it).

In a section on “background and rational”, the contract states that Kogan’s models have been “validity tested” on users who were not part of the training sample, and further claims: “Trait predictions based on Facebook likes are at near test-retest levels and have been compared to the predictions their romantic partners, family members, and friends make about their traits”.

“In all the previous cases, the computer-generated scores performed the best. Thus, the computer-generated scores can be more accurate than even the knowledge of very close friends and family members,” it adds.

His technology is described as “different from most social research measurement instruments” in that it is not solely based on self-reported data — with the follow-on claim being made that: “Using observed data from Facebook users’ profiles makes GS’ measurements genuinely behavioral.”

That suggestion, at least, seems fairly tenuous — given that a portion of Facebook users are undoubtedly aware that the site is tracking their activity when they use it, which in turn is likely to affect how they use Facebook.

So the idea that Facebook usage is a 100% naked reflection of personality deserves far more critical questioning than Kogan’s description of it in the contract with SCL.

And, indeed, some of the commentary around this news story has queried the value of the entire exposé by suggesting CA’s psychometric targeting wasn’t very effective — ergo, it may not have had a significant impact on the US election.

In contrast to claims being made for his technology in the 2014 contract, Kogan himself claimed in a TV interview earlier this month (after the scandal broke) that his predictive modeling was not very accurate at an individual level — suggesting it would only be useful in aggregate to, for example, “understand the personality of New Yorkers”.

Yesterday Channel 4 News reported that it had been able to obtain some of the data Kogan modeled for CA — supporting Wylie’s testimony that CA had not locked down access to the data.

In its report, the broadcaster spoke to some of the named US voters in Colorado — showing them the scores Kogan’s models had given them. Unsurprisingly, not all their interviewees thought the scores were an accurate reflection of who they were.

However regardless of how effective (or not) Kogan’s methods were, the bald fact that personal information on 50M+ Facebook users was so easily sucked out of the platform is of unquestionable public interest and concern.

The added fact this data set was used for psychological modeling for political message targeting purposes without people’s knowledge or consent just further underlines the controversy. Whether the political microtargeting method worked well or was hit and miss is really by the by.

In the contract, Kogan’s psychological profiling methods are described as “less costly, more detailed, and more quickly collected” than other individual profiling methods, such as “standard political polling or phone samples”.

The contract also flags up how the window of opportunity for his approach was closing — at least on Facebook’s platform. “GS’s method relies on a pre-existing application functioning under Facebook’s old terms of service,” it observes. “New applications are not able to access friend networks and no other psychometric profiling applications exist under the old Facebook terms.”

As I wrote last weekend, Facebook faced a legal challenge to the lax system of app permissions it operated in 2011. And after a data protection audit and re-audit by the Irish Data Protection Commissioner, in 2011 and 2012, the regulator recommended it shutter developers’ access to friend networks — which Facebook finally did (for both old and new apps) as of mid 2015.

But in mid 2014 existing developers on its platform could still access the data — as Kogan was able to, handing it off to SCL and its affiliates.

Other documents published by the committee today include a contract between Aggregate IQ — a Canadian data company which Wylie described to the committee as ‘CA Canada’ (aka yet another affiliate of CA/SCL) — and SCL Elections.

This contract, which is dated September 15, 2014, is for the: “Design and development of an Engagement Platform System”, also referred to as “the Ripon Platform”, and described as: “A scalable engagement platform that leverages the strength of SCLs modelling data, providing an actionable toolset and dashboard interface for the target campaigns in the 2014 election cycle. This will consist of a bespoke engagement platform (SCL Engage) to help make SCLs behavioural microtargeting data actionable while making campaigns more accountable to donors and supporter”.

Another contract between Aggregate IQ and SCL is dated November 25, 2013, and covers the delivery of a CRM system, a website and “the acquisition of online data” for a political party in Trinidad and Tobago. In this contract a section on “behavioral data acquisition” details their intentions thus:

  • Identify and obtain qualified sources of data that illustrate user behaviour and contribute to the development of psychographic profiling in the region

  • This data may include, but is not limited to:

    • Internet Service Provider (ISP) log files

    • First party data logs

    • Third party data logs

    • Ad network data

    • Social bookmarking

    • Social media sharing (Twitter, FB, MySpace)

    • Natural Language Processing (NLP) of URL text and images

    • Reconciliation of IP and User-Agent to home address, census tract, or dissemination area

In his evidence to the committee on Tuesday Wylie described the AIQ Trinidad project as a “pre-cursor to the Rippon project to see how much data could be pulled and could we profile different attributes in people”.

He also alleged AIQ has used hacker type techniques to obtain data. “AIQ’s role was to go and find data,” he told the committee. “The contracting is pulling ISP data and there’s also emails that I’ve passed on to the committee where AIQ is working with SCL to find ways to pull and then de-anonymize ISP data. So, like, raw browsing data.”

Another document in the bundle published today details a project pitch by SCL to carry out $200,000 worth of microtargeting and political campaign work for the conservative organization ForAmerica.org — for “audience building and supporter mobilization campaigns”.

There is also an internal SCL email chain regarding a political targeting project that also appears to involve the Kogan modeled Facebook data, which is referred to as the “Bolton project” (which seems to refer to work done for the now US national security advisor, John Bolton) — with some back and forth over concerns about delays and problems with data matching in some of the US states and overall data quality.

“Need to present the little information we have on the 6,000 seeders to [sic] we have to give a rough and ready and very preliminary reading on that sample ([name redacted] will have to ensure the appropriate disclaimers are in place to manage their expectations and the likelihood that the results will change once more data is received). We need to keep the client happy,” is one of the suggested next steps in an email written by an unidentified SCL staffer working on the Bolton project.

“The Ambassador’s team made it clear that he would want some kind of response on the last round of foreign policy questions. Though not ideal, we will simply piss off a man who is potentially an even bigger client if we remain silent on this because it has been clear to us this is something he is particularly interested in,” the emailer also writes.

“At this juncture, we unfortunately don’t have the luxury of only providing the perfect data set but must deliver something which shows the validity of what we have been promising we can do,” the emailer adds.

Another document is a confidential memorandum prepared for Rebekah Mercer (the daughter of US billionaire Robert Mercer; Wylie has said Mercer provided the funding to set up CA), former Trump advisor Steve Bannon and the (now suspended) CA CEO Alexander Nix advising them on the legality of a foreign corporation (i.e. CA), and foreign nationals (such as Nix and others), carrying out work on US political campaigns.

This memo also details the legal structure of SCL and CA — the former being described as a “minority owner” of CA. It notes:

With this background we must look first at Cambridge Analytica, LLC (“Cambridge”) and then at the people involved and the contemplated tasks. As I understand it, Cambridge is a Delaware Limited Liability Company that was formed in June of 2014. It is operated through 5 managers, three preferred managers, Ms. Rebekah Mercer, Ms. Jennifer Mercer and Mr. Stephen Bannon, and two common managers, Mr. Alexander Nix and a person to be named. The three preferred managers are all United States citizens, Mr. Nix is not. Cambridge is primarily owned and controlled by US citizens, with SCL Elections Ltd., (“SCL”) a UK limited company being a minority owner. Moreover, certain intellectual property of SCL was licensed to Cambridge, which intellectual property Cambridge could use in its work as a US company in US elections, or other activities.

On the salient legal advice point, the memo concludes that US laws prohibiting foreign nationals managing campaigns — “including making direct or indirect decisions regarding the expenditure of campaign dollars” — will have “a significant impact on how Cambridge hires staff and operates in the short term”.



from Social – TechCrunch https://ift.tt/2uv3piT
via IFTTT

Pacific Rim’s Robots Are Less Advanced Than Our Real Ones


I think we’re all at least passingly familiar with Pacific Rim. In case you aren’t, the first one is a super-dope movie from geek legend Guillermo Del Toro. And the second, according to […]

The post Pacific Rim’s Robots Are Less Advanced Than Our Real Ones appeared first on Geek.com.



from Geek.com https://ift.tt/2uwT7ic
via IFTTT