I'm a char salesman. I share things about: Programming, SysOps, Civil Rights, education, and things that make me happy. And robots.
847 stories · 13 followers

ithelpstodream: Here’s a MLK quote I’d love to see white people...

1 Share


ithelpstodream:

Here’s a MLK quote I’d love to see white people share.

Read the whole story
reconbot
18 hours ago
New York City

Crafting a high-performance TV user interface using React

1 Share

The Netflix TV interface is constantly evolving as we strive to figure out the best experience for our members. For example, after A/B testing, eye-tracking research, and customer feedback we recently rolled out video previews to help members make better decisions about what to watch. We’ve written before about how our TV application consists of an SDK installed natively on the device, a JavaScript application that can be updated at any time, and a rendering layer known as Gibbon. In this post we’ll highlight some of the strategies we’ve employed along the way to optimize our JavaScript application performance.

React-Gibbon

In 2015, we embarked on a wholesale rewrite and modernization of our TV UI architecture. We decided to use React because its one-way data flow and declarative approach to UI development make it easier to reason about our app. Obviously, we’d need our own flavor of React since at that time it only targeted the DOM. We were able to create a prototype that targeted Gibbon pretty quickly. This prototype eventually evolved into React-Gibbon and we began to work on building out our new React-based UI.

React-Gibbon’s API would be very familiar to anyone who has worked with React-DOM. The primary difference is that instead of divs, spans, inputs, etc., we have a single “widget” drawing primitive that supports inline styling.

React.createClass({
   render() {
       return <Widget style={{ text: 'Hello World', textSize: 20 }} />;
   }
});

Performance is a key challenge

Our app runs on hundreds of different devices, from the latest game consoles like the PS4 Pro to budget consumer electronics devices with limited memory and processing power. The low-end machines we target can often have sub-GHz single core CPUs, low memory and limited graphics acceleration. To make things even more challenging, our JavaScript environment is an older non-JIT version of JavaScriptCore. These restrictions make super responsive 60fps experiences especially tricky and drive many of the differences between React-Gibbon and React-DOM.

Measure, measure, measure

When approaching performance optimization it’s important to first identify the metrics you will use to measure the success of your efforts. We use the following metrics to gauge overall application performance:

  • Key Input Responsiveness - the time taken to render a change in response to a key press
  • Time To Interactivity - the time to start up the app
  • Frames Per Second - the consistency and smoothness of our animations
  • Memory Usage


The strategies outlined below are primarily aimed at improving key input responsiveness. They were all identified, tested and measured on our devices and are not necessarily applicable in other environments. As with all “best practice” suggestions, it is important to be skeptical and verify that they work in your environment and for your use case. We started off by using profiling tools to identify which code paths were executing and what their share of the total render time was; this led us to some interesting observations.

Observation: React.createElement has a cost

When Babel transpiles JSX it converts it into a number of React.createElement function calls which when evaluated produce a description of the next Component to render. If we can predict what the createElement function will produce, we can inline the call with the expected result at build time rather than at runtime.


// JSX
render() {
   return <MyComponent key='mykey' prop1='foo' prop2='bar' />;
}

// Transpiled
render() {
   return React.createElement(MyComponent, { key: 'mykey', prop1: 'foo', prop2: 'bar' });
}

// With inlining
render() {
   return {
       type: MyComponent,
       props: {
           prop1: 'foo',
           prop2: 'bar'
       },
       key: 'mykey'
   };
}


As you can see we have removed the cost of the createElement call completely, a triumph for the “can we just not?” school of software optimization.

We wondered whether it would be possible to apply this technique across our whole application and avoid calling createElement entirely. What we found was that if we used a ref on our elements, createElement needed to be called in order to hook up the owner at runtime. This also applies if you’re using the spread operator, which may contain a ref value (we’ll come back to this later).
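
For example, any element that takes a ref still has to go through createElement so that the owner can be attached (a contrived sketch, not taken from our app):

render() {
   // The ref means this element's owner has to be wired up at runtime,
   // so the call cannot be replaced with an object literal at build time.
   return <Widget ref="screen" style={{ text: 'Hello World' }} />;
}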

We use a custom Babel plugin for element inlining, but there is an official plugin that you can use right now. Rather than an object literal, the official plugin will emit a call to a helper function that is likely to disappear thanks to the magic of V8 function inlining. After applying our plugin, there were still quite a few components that weren’t being inlined, specifically Higher-order Components, which make up a decent share of the total components being rendered in our app.

Problem: Higher-order Components can’t use Inlining

We love Higher-order Components (HOCs) as an alternative to mixins. HOCs make it easy to layer on behavior while maintaining a separation of concerns. We wanted to take advantage of inlining in our HOCs, but we ran into an issue: HOCs usually act as a pass-through for their props. This naturally leads to the use of the spread operator, which prevents the Babel plugin from being able to inline.
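
For illustration, a minimal pass-through HOC might look something like this (withFocusTracking is a made-up name, not one of our actual HOCs):

// A hypothetical pass-through HOC. The {...this.props} spread is exactly
// what blocks build-time inlining, because the spread could carry a ref.
function withFocusTracking(WrappedComponent) {
   return React.createClass({
       render() {
           return <WrappedComponent {...this.props} />;
       }
   });
}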

When we began the process of rewriting our app, we decided that all interactions with the rendering layer would go through declarative APIs. For example, instead of doing:

componentDidMount() {
   this.refs.someWidget.focus()
}

In order to move application focus to a particular Widget, we implemented a declarative focus API that allows us to describe what should be focused during render, like so:

render() {
   return <Widget focused={true} />;
}

This had the fortunate side-effect of allowing us to avoid the use of refs throughout the application. As a result we were able to apply inlining regardless of whether the code used a spread or not.


// before inlining
render() {
   return <MyComponent {...this.props} />;
}

// after inlining
render() {
   return {
       type: MyComponent,
       props: this.props
   };
}

This greatly reduced the number of function calls and the amount of property merging we previously had to do, but it did not eliminate them completely.

Problem: Property interception still requires a merge

After we had managed to inline our components, our app was still spending a lot of time merging properties inside our HOCs. This was not surprising, as HOCs often intercept incoming props in order to add their own or change the value of a particular prop before forwarding them on to the wrapped component.

We analyzed how stacks of HOCs scaled with prop count and component depth on one of our devices, and the results were informative.


[Chart: render time vs. number of props passed through the HOC stack, for several component depths]


They showed that there is a roughly linear relationship between the number of props moving through the stack and the render time for a given component depth.

Death by a thousand props

Based on our findings we realized that we could improve the performance of our app substantially by limiting the number of props passed through the stack. We found that groups of props were often related and always changed at the same time. In these cases, it made sense to group those related props under a single “namespace” prop. If a namespace prop can be modeled as an immutable value, subsequent shouldComponentUpdate calls can be optimized further by checking referential equality rather than doing a deep comparison. This gave us some good wins, but eventually we found that we had reduced the prop count as much as was feasible. It was now time to resort to more extreme measures.
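
As a rough sketch of the idea (the playback namespace and its prop names are invented for illustration), related props collapse into one object that is replaced rather than mutated, so a component can bail out of rendering with a single reference check:

// Hypothetical example: volume, muted and playbackRate always change together,
// so they travel as a single immutable "playback" namespace prop.
const PlayerControls = React.createClass({
   shouldComponentUpdate(nextProps) {
       // Referential equality instead of a key-by-key deep comparison.
       return nextProps.playback !== this.props.playback;
   },
   render() {
       const playback = this.props.playback;
       return <Widget style={{ text: 'Volume: ' + playback.volume }} />;
   }
});

// The parent only creates a new playback object when one of its values
// actually changes; otherwise it passes the previous reference straight through.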

Merging props without key iteration

Warning, here be dragons! This is not recommended and most likely will break many things in weird and unexpected ways.

After reducing the props moving through our app we were experimenting with other ways to reduce the time spent merging props between HOCs. We realized that we could use the prototype chain to achieve the same goals while avoiding key iteration.


// before proto merge
render() {
   const newProps = Object.assign({}, this.props, { prop1: 'foo' })
   return <MyComponent {...newProps} />;
}

// after proto merge
render() {
   const newProps = { prop1: 'foo' };
   newProps.__proto__ = this.props;
   return {
       type: MyComponent,
       props: newProps
   };
}

In the example above we reduced the 100-depth, 100-prop case from a render time of ~500ms to ~60ms. Be advised that using this approach introduced some interesting bugs, namely in the event that this.props is a frozen object. When this happens, the prototype chain approach only works if the __proto__ is assigned after the newProps object is created. Needless to say, if you are not the owner of newProps it would not be wise to assign the prototype at all.

Problem: “Diffing” styles was slow

Once React knows the elements it needs to render, it must then diff them with the previous values in order to determine the minimal changes that must be applied to the actual DOM elements. Through profiling we found that this process was costly, especially during mount, partly due to the need to iterate over a large number of style properties.

Separate out style props based on what’s likely to change

We found that often many of the style values we were setting were never actually changed. For example, say we have a Widget used to display some dynamic text value. It has the properties text, textSize, textWeight and textColor. The text property will change during the lifetime of this Widget, but we want the remaining properties to stay the same. The cost of diffing the 4 widget style props is spent on each and every render. We can reduce this by separating out the things that can change from the things that don't.

const memoizedStylesObject = { textSize: 20, textWeight: 'bold', textColor: 'blue' };


<Widget staticStyle={memoizedStylesObject} style={{ text: this.props.text }} />

If we are careful to memoize the memoizedStylesObject object, React-Gibbon can then check for referential equality and only diff its values if that check proves false. This has no effect on the time it takes to mount the widget but pays off on every subsequent render.
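
One straightforward way to get that referential stability (a sketch; the real implementation may differ) is to create the static style object once, outside of render, so the same reference is passed every time:

// Created once, so the reference is stable across renders.
const memoizedStylesObject = { textSize: 20, textWeight: 'bold', textColor: 'blue' };

React.createClass({
   render() {
       // staticStyle is referentially equal on every render and can be skipped
       // by the differ; only the dynamic text prop needs to be diffed each time.
       return <Widget staticStyle={memoizedStylesObject} style={{ text: this.props.text }} />;
   }
});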

Why not avoid the iteration altogether?

Taking this idea further, if we know what style props are being set on a particular widget, we can write a function that does the same work without having to iterate over any keys. We wrote a custom Babel plugin that performed static analysis on component render methods. It determines which styles are going to be applied and builds a custom diff-and-apply function which is then attached to the widget props.


// This function is written by the static analysis plugin
function __update__(widget, nextProps, prevProps) {
   var style = nextProps.style,
       prev_style = prevProps && prevProps.style;


   if (prev_style) {
       var text = style.text;
       if (text !== prev_style.text) {
           widget.text = text;
       }
   } else {
       widget.text = style.text;
   }
}


React.createClass({
   render() {
       return (
           <Widget __update__={__update__} style={{ text: this.props.title }}  />
       );
   }
});


Internally React-Gibbon looks for the presence of the “special” __update__ prop and will skip the usual iteration over previous and next style props, instead applying the properties directly to the widget if they have changed. This had a huge impact on our render times at the cost of increasing the size of the distributable.
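
Conceptually, the update path looks something like the sketch below (a simplification for illustration, not React-Gibbon’s actual internals):

// Simplified sketch of the style update path; not the real React-Gibbon code.
function genericStyleDiff(widget, prevStyle, nextStyle) {
   // Generic slow path: iterate over every style key and apply what changed.
   for (var key in nextStyle) {
       if (!prevStyle || prevStyle[key] !== nextStyle[key]) {
           widget[key] = nextStyle[key];
       }
   }
}

function commitStyleUpdate(widget, prevProps, nextProps) {
   if (typeof nextProps.__update__ === 'function') {
       // Generated fast path: touches only the styles this component can set.
       nextProps.__update__(widget, nextProps, prevProps);
       return;
   }
   genericStyleDiff(widget, prevProps && prevProps.style, nextProps.style);
}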

Performance is a feature

Our environment is unique, but the techniques we used to identify opportunities for performance improvements are not. We measured, tested and verified all of our changes on real devices. Those investigations led us to discover a common theme: key iteration was expensive. As a result we set out to identify every instance of merging in our application and determine whether it could be optimized. Here’s a list of some of the other things we’ve done in our quest to improve performance:

  • Custom Composite Component - hyper optimized for our platform
  • Pre-mounting screens to improve perceived transition time
  • Component pooling in Lists
  • Memoization of expensive computations
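
A generic sketch of that last memoization idea (not code from our app): cache an expensive derivation against its inputs so repeated renders with the same inputs skip the work entirely.

// Generic single-slot memoizer: remembers the last arguments and result.
function memoizeLast(fn) {
   var lastArgs = null;
   var lastResult;
   return function () {
       var args = Array.prototype.slice.call(arguments);
       var same = lastArgs !== null &&
           lastArgs.length === args.length &&
           lastArgs.every(function (arg, i) { return arg === args[i]; });
       if (!same) {
           lastResult = fn.apply(null, args);
           lastArgs = args;
       }
       return lastResult;
   };
}

// e.g. const getSortedTitles = memoizeLast(sortTitles);
// getSortedTitles(list) only re-sorts when `list` is a new reference.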

Building a Netflix TV UI experience that can run on the variety of devices we support is a fun challenge. We nurture a performance-oriented culture on the team and are constantly trying to improve the experiences for everyone, whether they use the Xbox One S, a smart TV or a streaming stick. Come join us if that sounds like your jam!


Read the whole story
reconbot
4 days ago
New York City

In Hamburg, People Stare At You

2 Shares
Hamburg – Photo: Pedro | FlickrCC

In Hamburg, you have to pay attention.

Things are moving here on the edge of time. A store doesn’t open at 9:59, it opens at 10:00 exactly. People still wear wristwatches, not as an accessory, not as a nod to the 90s; they actually use them for their intended purpose. The train / subway / HochBahn is rarely, if ever, late. Only once in six whole months did I see a train come startlingly late (I would consider “startlingly late” ten minutes), and it caused panic to ripple throughout the crowd. Where is the train? How could this happen? When the HochBahn comes, it opens and closes its mouth within ten seconds and you better already be there to jump in. One time I made it, but my bag got caught. The doors do not wait for anyone or anything. Every four minutes it arrives and leaves. Time is important here, efficiency is important. Hamburg people cannot live on borrowed time, and there is something to respect in this efficiency.

In Hamburg, people stare at you.

Truly, stoically, painfully, everlastingly look at you. Never have a cold sore in Hamburg. Make sure you brush your hair. The staring, the (perceived but often not real) judgement, and the curiosity only increases between the elderly and the children. You feel 15 and awkward again in Hamburg. Why is everyone looking at me? They are not, in fact; everyone is looking at everyone else. Over the months, the stares don’t feel so strange or threatening. You begin to realize that the staring, the looking-for-longer-than-necessary-look is how people interact with each other here. It is different, it is uncomfortable, it can be unbearable and then… it isn’t. Eventually you will stare back.

In Hamburg, fashion is function over form. There are not a lot of bright colors or varieties of clothing here. Sensible black shoes, a Fjällräven backpack, a jacket that hits your knees that is both waterproof and windproof for the unpredictable dark winter. In some cases, this is a relief. You don’t have to fuss much about your wardrobe in Hamburg. It is easy to wear greys and blacks and the occasional pop of color scarf. You bundle up here. The extreme humidity in the summer means you sweat, but in the winter it means there is a sharper cold. Looking put together, clean, sharp, business ready is very important, but fashion in itself is not. Maybe it is the climate or maybe it is the culture… I can’t tell the difference.

In Hamburg, the food also does not confuse or surprise you.

Curryworst – Photo: Lucas Richarz | FlickrCC

It does what it is supposed to do, it does not trick you with misleading imagery. You will see sugar pellets on the tables, not “natural looking” sugar to give you the facade of health. Many, many cured meats in sausage form line the shelves of the supermarket. Powdered chicken broth. Certainly most of the U.S. food isn’t good for you, but we create the illusion that it is. Why bother? says Hamburg. You know what you are getting yourself into.

In Hamburg, people are reserved but real.

Not constantly warm and inviting, but absolutely genuine. They do not mince words; they mean what they say and say what they mean. It is not easy to have a “passive aggressive” attitude with a German person. They will not accept it or, frankly, understand it. Because of this cultural quirk, which many German people pride themselves on, you tend to build character and resilience living here, and you don’t get the luxury of having things sugarcoated with false niceties. It is easy to miss friendliness and openness, but you know where you stand with people here.

In Hamburg, it’s clean and shiny everywhere, minus the cigarette butts.

Americans underestimate how much people still smoke in Europe. When we were flooded with propaganda about the dangers of smoking, Europe was laughing with a cigarette in one hand and some red wine in the other. Smoking is not “cool” here, it is common. A stress relief. A social engagement. An average part of your day. I have seen a mother smoke while holding a baby. I have seen kids who look no more than 14 light up for an afternoon puff.

In Hamburg, you will not find a tourist city.

I really love Hamburg.

Hamburg

Venice is flooded with tourists. Berlin. London. These are the cities people want to flock to for the culture and the atmosphere, but not Hamburg. Hamburg instead is an economic wonderland. Ranked number 14 in the world among the best cities to live in, Hamburg is thriving independent of tourism. It is a well-oiled machine. This place is a secure cocoon of prosperity, growth and problem solving, while perhaps lacking some of the spectacles that have tourists come flocking.

In Hamburg, people are tall and fit.

Not all, but quite a lot. You step off the plane and you are in the land of the giants. Testing my theory, the internet tells me that Germany is ranked the sixth-tallest nation, overshadowed only by its literal neighboring European countries. These long and lean people are often into sports. Over and over I am asked: what kind of sports do you do? None, I say. And they believe I misunderstood the question. So many people here won the traditionally attractive gene pool award and would be considered extremely beautiful or handsome in the U.S. There are misplaced athletes and actors everywhere here.

In Hamburg, you better not have celiac disease.

Forget everything I said about the food earlier. Throw it away. Rewind. I forgot perhaps the most important part of living here: The bread. Oh my God, the bread. German people have said that when they visit or move to another country, one of the first things they miss is their bread. Bakeries are everywhere: Dat Backhus, Brotgarten, Ditsch, Le Crobag (to name a few), all across the street from each other, all co-existing in some sort of heavenly, gluteny alternative world. How can everyone have the same consistent, fluffy, perfectly baked bread? How do they all have the same warm lighting to shine over the glistening loaves like newborn babies? It is no wonder everyone is into sports here. My belly gets bigger with delicious carbohydrates every day.

In Hamburg, your heart better be ready for a beating.

It isn’t a walk through the park (though they have some lovely parks here), it’s more like a jog through a crowded well-dressed city block. You have tall, intimidating buildings intermixed with pre-war ones. (Yes, that war.) You have these gothic canal ways in Hafen City, the alleyways of prostitution in the Reeperbahn, you have beautiful, rich, put together mansions on the waterfront, red tailed squirrels, so many jobs related to green energy, people helping you with your luggage up the (dare I SAY NEVER ENDING) stairs. Minimal homelessness. A hodgepodge of things that make a city vibrant and thriving and lonely and forgetful.

Loving Hamburg is not easy.

In fact, I don’t know if I would say Hamburg is lovable in the traditional sense of the word. However, particular captured moments of watching snow fall in November while riding a speeding train, helping an old woman down the stairs as she whispers gratitude in German, or successfully ordering a pizza in a language other than English make me grateful for my time in such a sensible, functional, puzzling and flourishing city.

This story was written by Sarah Miller to be shared on AirlineReporter. Sarah, originally from Seattle, WA,  is a freelance English teacher in Hamburg, Germany. She can often be found whining online, making YouTube videos reviewing children’s books, or taking day trips to fancy bakeries to eat her feelings. You can read more of her work via her blog Ham in Hamburg or on her Instagram

The post In Hamburg, People Stare At You appeared first on AirlineReporter.

Read the whole story
reconbot
5 days ago
New York City
satadru
5 days ago
New York, NY

Finally Revealed: Cloudflare Has Been Fighting NSL for Years

1 Comment and 2 Shares

EFF Fights National Security Letter on Behalf of Cloudflare

We’re happy to be able to announce that Cloudflare is the second courageous client in EFF’s long-running lawsuit challenging the government’s unconstitutional national security letter (NSL) authority. Cloudflare, a provider of web performance and security services, just published its new transparency report announcing it has been fighting the NSL statute since 2013.

Like EFF’s other client, CREDO, Cloudflare took a stand against the FBI’s use of unilateral, perpetual NSL gag orders that resulted in a secret court battle stretching several years and counting. The litigation—seeking a ruling that the NSL power is unconstitutional—continues, but we’re pleased that we can at long last publicly applaud Cloudflare for fighting on behalf of its customers. Now more than ever we need the technology community to stand with users in the courts. We hope others will follow Cloudflare’s example.

Late last Friday, the government filed a public notice with the U.S. Court of Appeals for the Ninth Circuit identifying Cloudflare as an NSL recipient and EFF’s client in the lawsuit. The notice explains that the FBI determined it no longer needed to gag Cloudflare in conjunction with an NSL issued in early 2013.

Under the USA FREEDOM Act of 2015, the FBI is required to periodically review outstanding NSLs and lift gag orders on its own accord if circumstances no longer support a need for secrecy. As we’ve seen, this periodic review process has recently resulted in some very selective transparency by the FBI, which has nearly complete control over the handful of NSL gags it retracts, not to mention the hundreds of thousands it leaves in place. Make no mistake: this process is irredeemably flawed. It fails to place on the FBI the burden of justifying NSL gag orders in a timely fashion to a neutral third party, namely a federal court. Nevertheless, Cloudflare’s fight demonstrates that it is not unreasonable to require the FBI to relinquish some of its customary secrecy in national security cases.

The revelation of Cloudflare’s participation in our lawsuit follows the identification of CREDO as EFF’s other client last November. In CREDO’s case, the district court found that the FBI had failed to justify the need for the gag orders connected to two NSLs also issued in 2013.

But EFF’s fight against NSLs is by no means over. Our consolidated lawsuits remain on appeal in the Ninth Circuit, where we continue to argue that the entire NSL scheme is unconstitutional. The First Amendment requires that any gag order imposed by the executive branch be quickly evaluated by a court and demands that the government meet a high burden of justifying the gag. The FBI’s desultory removal of its unilateral NSL gags comes nowhere close to satisfying this standard. Oral argument has been scheduled in San Francisco for the week of March 20; we look forward to making these arguments there and then.


Read the whole story
skorgu
6 days ago
Good job cloudflare, good job.
reconbot
6 days ago
New York City

Law Enforcement Access to IoT Data

2 Shares

In the first of what will undoubtedly be a large number of battles between companies that make IoT devices and the police, Amazon is refusing to comply with a warrant demanding data on what its Echo device heard at a crime scene.

The particulars of the case are weird. Amazon's Echo does not constantly record; it only listens for its name. So it's unclear that there is any evidence to be turned over. But this general issue isn't going away. We are all under ubiquitous surveillance, but it is surveillance by the companies that control the Internet-connected devices in our lives. The rules by which police and intelligence agencies get access to that data will come under increasing pressure for change.

Related: A newscaster discussed Amazon's Echo on the news, causing devices in the same room as tuned-in televisions to order unwanted products. This year, the same technology is coming to LG appliances such as refrigerators.

Read the whole story
reconbot
6 days ago
New York City

Attributing the DNC Hacks to Russia

3 Shares

President Barack Obama's public accusation of Russia as the source of the hacks in the US presidential election and the leaking of sensitive e-mails through WikiLeaks and other sources has opened up a debate on what constitutes sufficient evidence to attribute an attack in cyberspace. The answer is both complicated and inherently tied up in political considerations.

The administration is balancing political considerations and the inherent secrecy of electronic espionage with the need to justify its actions to the public. These issues will continue to plague us as more international conflict plays out in cyberspace.

It's true that it's easy for an attacker to hide who he is in cyberspace. We are unable to positively identify particular pieces of hardware and software around the world. We can't verify the identity of someone sitting in front of a keyboard through computer data alone. Internet data packets don't come with return addresses, and it's easy for attackers to disguise their origins. For decades, hackers have used techniques such as jump hosts, VPNs, Tor and open relays to obscure their origin, and in many cases they work. I'm sure that many national intelligence agencies route their attacks through China, simply because everyone knows lots of attacks come from China.

On the other hand, there are techniques that can identify attackers with varying degrees of precision. It's rarely just one thing, and you'll often hear the term "constellation of evidence" to describe how a particular attacker is identified. It's analogous to traditional detective work. Investigators collect clues and piece them together with known modes of operation. They look for elements that resemble other attacks and elements that are anomalies. The clues might involve ones and zeros, but the techniques go back to Sir Arthur Conan Doyle.

The University of Toronto-based organization Citizen Lab routinely attributes attacks against the computers of activists and dissidents to particular Third World governments. It took months to identify China as the source of the 2012 attacks against the New York Times. While it was uncontroversial to say that Russia was the source of a cyberattack against Estonia in 2007, no one knew if those attacks were authorized by the Russian government until the attackers explained themselves. And it was the Internet security company CrowdStrike that first attributed the attacks against the Democratic National Committee to Russian intelligence agencies in June, based on multiple pieces of evidence gathered from its forensic investigation.

Attribution is easier if you are monitoring broad swaths of the Internet. This gives the National Security Agency a singular advantage in the attribution game. The problem, of course, is that the NSA doesn't want to publish what it knows.

Regardless of what the government knows and how it knows it, the decision of whether to make attribution evidence public is another matter. When Sony was attacked, many security experts, myself included, were skeptical of both the government's attribution claims and the flimsy evidence associated with it. I only became convinced when the New York Times ran a story about the government's attribution, which talked about both secret evidence inside the NSA and human intelligence assets inside North Korea. In contrast, when the Office of Personnel Management was breached in 2015, the US government decided not to accuse China publicly, either because it didn't want to escalate the political situation or because it didn't want to reveal any secret evidence.

The Obama administration has been more public about its evidence in the DNC case, but it has not been entirely public.

It's one thing for the government to know who attacked it. It's quite another for it to convince the public who attacked it. As attribution increasingly relies on secret evidence -- as it did with North Korea's attack on Sony in 2014 and almost certainly does regarding Russia and the previous election -- the government is going to have to face the choice of making previously secret evidence public and burning sources and methods, or keeping it secret and facing perfectly reasonable skepticism.

If the government is going to take public action against a cyberattack, it needs to make its evidence public. But releasing secret evidence might get people killed, and it would make any future confidentiality assurances we make to human sources completely non-credible. This problem isn't going away; secrecy helps the intelligence community, but it wounds our democracy.

The constellation of evidence attributing the attacks against the DNC, and subsequent release of information, is comprehensive. It's possible that there was more than one attack. It's possible that someone not associated with Russia leaked the information to WikiLeaks, although we have no idea where that someone else would have obtained the information. We know that the Russian actors who hacked the DNC -- both the FSB, Russia's principal security agency, and the GRU, Russia's military intelligence unit -- are also attacking other political networks around the world.

In the end, though, attribution comes down to whom you believe. When Citizen Lab writes a report outlining how a United Arab Emirates human rights defender was targeted with a cyberattack, we have no trouble believing that it was the UAE government. When Google identifies China as the source of attacks against Gmail users, we believe it just as easily.

Obama decided not to make the accusation public before the election so as not to be seen as influencing the election. Now, afterward, there are political implications in accepting that Russia hacked the DNC in an attempt to influence the US presidential election. But no amount of evidence can convince the unconvinceable.

The most important thing we can do right now is deter any country from trying this sort of thing in the future, and the political nature of the issue makes that harder. Right now, we've told the world that others can get away with manipulating our election process as long as they can keep their efforts secret until after one side wins. Obama has promised both secret retaliations and public ones. We need to hope they're enough.

This essay previously appeared on CNN.com.

EDITED TO ADD: The ODNI released a declassified report on the Russian attacks. Here's a New York Times article on the report.

And last week there were Senate hearings on this issue.

EDITED TO ADD: A Washington Post article talks about some of the intelligence behind the assessment.

Read the whole story
reconbot
7 days ago
New York City