I'm a char salesman. I share things about: Programming, SysOps, Civil Rights, education, and things that make me happy. And robots.
1380 stories
·
18 followers

#427: Inconvenient

1 Comment

Kindergarten was a hard transition year for Momo, and consequently for us. It was a year of lots of parent-teacher-administration meetings: Momo’s not listening in class, Momo’s leaving the classroom, Momo doesn’t do what the other kids are doing, Momo’s sensitive and has trouble getting a handle on her emotions. And then, on our end, there was the constant struggle to just keep our heads above water at home, between my long commute and Kev’s inflexible schedule and the endurance race of early dismissals and sick days, and how much we relied on daycare and babysitters to make it work.

It was chaos, but it was a chaos we thought we had a decent handle on – Momo was cheerful despite everything, and she loved her caregivers, and we kept the wheels on the jalopy. That’s what you do, when you have to – there’s no shame in doing whatever you need to do to survive in a system that puts so many obstacles in the way of families.

And then… I got laid off, and as we rounded into another school year and the prospect of finding childcare all over again, Kev and I decided to try it with me going back to freelancing and taking on more of the relentless domestic management that’d fallen by the wayside. And, after a painful summer adjustment period of being a full-time SAHM, things have gotten a lot calmer over here in Chez Baby. Momo gets picked up and dropped off at school by a parent – which means we’re actually talking directly to her teacher – and dinners get made, and shopping gets done, and the whole family is together again at 4PM every day instead of snatching only an hour together after I get home from work.

And… god, it super doesn’t make me feel great, but Momo is thriving this year. After a bit of an adjustment to the new school, she’s been a total angel, and her teachers have been working so hard with her to support her and integrate the way she learns into the classroom. It’s been really good. So when her teacher made an offhand comment about how it probably helps that I’ve been around more often, it’s…

…well, it’s hard, to not take it personally. To not look at the very obvious evidence in front of us that, while there’s absolutely nothing wrong with doing what you need to do… for us? Specifically?

What we’d needed to do was this. Every kid and family situation is different, but our Momo needed more stability, more attention, more time, and she didn’t know how to ask for it. Like us, she was putting on a brave face and making do with what was available, and all the burdens she couldn’t carry were dropping out of her little arms.

It’s hard to make an argument for my self-actualization, for my career and autonomy and my desire to work outside the home, when it is so clear that Momo needed me to pay closer attention. She needed me. And after… everything, after how hard I’ve struggled this whole time to balance having a career and being a caregiver, the weight of knowing I hadn’t been doing it well enough hit me like a slap. A good, hard one, though; the kind you need to wake up.

It’s unwinnable; working mothers are expected to mother like they don’t work, and are expected to work like they aren’t mothers. It’s hard not to look at the whole situation and smell how it reeks of how women are forced out of the workforce by quandaries like this every day, because by and large it’s mothers that are expected to be the ones to do so. Not everyone can pivot to freelancing, like I’ve been blessed to be able to.

So I’m feeling a really complex way about this all, right now. Relieved that we can see the light again, grateful that I’ve gotten the support I need to make art from home, angry that mothers are expected to do this so often, grieving the choice I couldn’t make to work outside the home. But mostly, I’m happy because Momo is thriving, and that’s… no matter what, that’s the important part.

reconbot · 1 day ago · New York City
Well this made me tear up.

20191012

1 Share

reconbot · 1 day ago · New York City

My Tree

3 Shares
reconbot · 6 days ago · New York City

By allkindsoftime in "Client: We accept the risks of you testing in prod." on MeFi

2 Shares
The tree trimming isn't even the coolest thing utilities are doing with the vegetation near their lines. I know a guy who spent a year and a half with one recently working on this stuff.

Some of the leading utilities with the highest risk of lines being encroached on by vegetation simply can't physically maintain all the vegetation in proximity to their assets, even if they employed every tree contractor in North America full time - there's just too much vegetation growing too fast across millions of miles of conductor wires and the towers/insulators/switches that string them off the ground.

So what they instead have to do is identify where they do or do not have risk of vegetation encroachment, and then prioritize which lines will be serviced for veg management based on not just the yes/no of whether there are trees on this particular set of lines, but also on things like:
- the species of the tree*
- the density of vegetation fuel in the area**
- the impact of a fire spreading in this particular area were it to start here***
- the egress considerations of the area****
- climate considerations*****
- etc. - the list goes on; the point is there are a WHOLE LOT OF THINGS nobody thinks about that power companies have to consider, since they'll be the ones the public holds responsible if there is a fire.

*gray pines are much more likely to fall in than are other species
**in my home state, CalFire sets fire "tiers": the foothills, with the thick underbrush and forests most likely to burn, are the highest-risk tier, descending down to the lowest-risk tiers like the middle of the Central Valley, where there are few to no trees to burn
***things like number and type of structures in the area, population density, etc.
****how do the road types (country 2-lane, intermediate highway, interstate freeway) in the area compare with the population density? That comparison yields an egress score that can tell you a particular town would be especially hard to evacuate people out of while simultaneously bringing firefighting services in.
*****is this tower in proximity to the ocean and therefore at higher risk of corrosion? is this a high-wind-event area where weather can cause lines to go down or be impacted by veg? etc.
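Conceptually, the prioritization described above boils down to a scoring function over each span of line, combining a yes/no encroachment check with the weighted risk factors. A minimal sketch in Python - the factor names, weights, and example spans are invented for illustration, not any utility's real model:

```python
from dataclasses import dataclass

@dataclass
class LineSpan:
    """One span of conductor between two structures (illustrative fields)."""
    span_id: str
    has_encroaching_veg: bool  # yes/no: any trees near this span at all?
    species_risk: float        # 0..1, e.g. gray pine scores high
    fuel_tier: int             # CalFire-style tier, 1 (low) .. 3 (high)
    impact: float              # 0..1 proxy for structures / population density
    egress: float              # 0 (easy to evacuate) .. 1 (hard)
    climate: float             # 0..1 corrosion / high-wind exposure

def priority_score(span: LineSpan) -> float:
    """Fold the factors into one ranking number. Weights are made up."""
    if not span.has_encroaching_veg:
        return 0.0  # no vegetation near the line, nothing to trim
    return (0.30 * span.species_risk
            + 0.25 * (span.fuel_tier / 3)
            + 0.20 * span.impact
            + 0.15 * span.egress
            + 0.10 * span.climate)

spans = [
    LineSpan("valley-01", False, 0.0, 1, 0.2, 0.1, 0.1),
    LineSpan("foothill-07", True, 0.9, 3, 0.6, 0.8, 0.3),
]
# Highest-risk spans float to the top of the veg-management worklist.
worklist = sorted(spans, key=priority_score, reverse=True)
```

The point of the sketch is the shape of the problem: a hard gate (is there vegetation at all?) followed by a weighted blend of the softer factors, producing an ordering rather than a binary answer.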

How do you solve for all of that, when basically you're not an energy provider so much as an asset management company that has a shit ton of assets (millions of wooden power poles and steel/composite structures with hundreds of millions of conductor miles between them) that run through the middle of all of this vegetation to bring power to people who have decided they want to build their communities in the nice shady places that are also the most likely to burn when fires run amok? Not to even get into the added complexity that in the US the policy for most of our history has been to fight and stop the fires rather than let them burn and keep ecosystems in their natural cycle, so when the fires do get loose, they burn hotter and faster than they would normally.

It's an almost impossible situation these energy companies are facing.

Fortunately they're coming up with some really novel solutions that utilize some of the most cutting-edge technology, in some cases even building AI/machine-learning models to help solve these problems in ways humans never really were able to at scale.

Enter LiDAR - Light Detection and Ranging, radar's more zoomed-in, hyper-focused cousin. You know, the technology Elon Musk says will never enable self-driving vehicles? It may not be good at that level of detection, but it fits the bill for utility providers.

All they have to do (natch) is pay a handful of helicopter / fixed-wing contractor outfits to strap LiDAR image detection devices to their aerial vehicles, and then send them out to fly their lines, taking LiDAR shots of every inch of the line, anywhere from 60 to a couple hundred laser points per square meter. These points are beamed back to the helicopter, and then once the vehicle returns they are stitched together using complex software that accounts for things like the distance of the device from the point sensed, the speed of the vehicle housing the device, the GPS location and other metadata captured along with the LiDAR "point clouds," and a few other things. Once all stitched together, they begin to form a 3D rendering of whatever was between the sensor and the ground - power poles, insulators, conductor wires, birds sitting on those wires, people walking under those wires, tree branches extending out over those wires, the height of the trees next to the wires. All of those things and more have to be classified, but you're still talking tens or hundreds of thousands of points; having a human do it would take days just to cover a single span between two poles. But computers can learn how to do that really well and really fast, and all of a sudden you now have classified images of your assets and the vegetation around them.
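The stitching step is essentially georeferencing: each laser return is placed in world coordinates from the sensor's position and the beam's range and angles, and a first-pass classification can then key off height. A deliberately simplified sketch - flat earth, no roll/pitch/yaw correction, and the height-band thresholds are invented; real pipelines use full aircraft pose, sensor timing, and learned classifiers:

```python
import math

def georeference(sensor_pos, rng_m, azimuth_deg, depression_deg):
    """Place one LiDAR return in world coordinates from the sensor's
    GPS position plus the beam's range and angles. Toy flat-earth model."""
    x, y, z = sensor_pos
    az = math.radians(azimuth_deg)
    dep = math.radians(depression_deg)  # angle below horizontal
    horiz = rng_m * math.cos(dep)       # horizontal component of the range
    return (x + horiz * math.sin(az),
            y + horiz * math.cos(az),
            z - rng_m * math.sin(dep))

def classify_point(z_m, ground_z_m, conductor_z_m, tol_m=1.0):
    """Crude height-band classifier: ground, conductor, or vegetation.
    Production systems learn this from full point-cloud features."""
    if z_m - ground_z_m < 0.5:
        return "ground"
    if abs(z_m - conductor_z_m) < tol_m:
        return "conductor"
    return "vegetation"
```

Even this toy version shows why the metadata matters: without an accurate sensor position and beam geometry per shot, the points can't be fused into a coherent 3D scene at all.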

Then you can start augmenting your existing asset data with annually refreshed renderings of what the environment around each asset looked like when you last flew over it, and start to look for things like "danger trees" - trees in proximity to an asset and of sufficient height that, were they knocked over, they could fall in to an asset - and "risk trees" - trees unlikely to fall in for whatever reason, but which might have branches extending into the impact zone around a conductor wire, where those branches could fall in, creating an outage that could potentially lead to an ignition. But even with all this amazing wealth of data about both the assets and the environment around them, there are SO MANY lines and towers running through SO MANY remote locations that you still need to employ computer vision models to actually look at all the imagery and renderings and do the math to tell you where your most at-risk lines are.
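Once the rendering is trusted, the "danger tree" check itself is simple geometry: could the tree, falling toward the line, reach the conductor? A sketch under obviously simplified assumptions - flat ground, the tree falls as a rigid pole, and the clearance buffer value is invented:

```python
import math

def is_danger_tree(tree_height_m, tree_base_xy, conductor_xy, clearance_m=3.0):
    """Flag a tree whose top, if it fell straight toward the line,
    would land within `clearance_m` of the conductor's ground position."""
    dx = tree_base_xy[0] - conductor_xy[0]
    dy = tree_base_xy[1] - conductor_xy[1]
    reach = tree_height_m + clearance_m  # how far the falling top can strike
    return math.hypot(dx, dy) <= reach
```

Scaled up, this predicate runs over millions of tree/conductor pairs extracted from the classified point clouds, which is exactly why the screening has to be automated.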

Then, once you know all of that - which has taken you dozens of teams of people and who knows how many contractors and how much dev time - you still need to operationalize it. How does it all translate down into operational plans that manage what work needs to be done where, and when, and by whom? You can't just send tree crews out willy-nilly: there are weather considerations, and terrain considerations (where the heli-saw comes in handy), and bird-nesting-season considerations (you can't fly during raptor nesting in a lot of areas), and EPA restrictions, and right-of-way-over-private-land considerations. The utility I worked with even kept a "BAD DOG" database listing landowner locations with a dog known to try to attack line maintenance engineers.

I digress. Tl;dr - maintaining vegetation with the heli-saw is only one of the many ways this stuff gets done, but there's a hell of a lot that goes into it before you even get the heli-saw off the ground.
reconbot · 15 days ago · New York City
skorgu · 20 days ago

Computer-Age Typography: Hybrid Legibility Explains that Ubiquitous Check Font [ARTICLE]

1 Share


It is one of the most familiar and widely used character sets in the world, but it also looks dated or retrofuturistic, like something originally designed for use in a vintage science fiction film. The numbers are indeed old and have in fact inspired lookalike fonts used in computing and futuristic settings, but the original characters were never meant to be harbingers of space-age aesthetics. Their distinctive shapes were in fact grounded entirely in an everyday problem: cashing checks at banks.

In the early 1900s, checks were still being processed manually by bank workers, which took a lot of time and consequently cost a lot of money. As people wrote more and more checks, the banking industry became increasingly interested in a way to automate the process — in particular, a standardized solution that would function across institutions.

By the mid-1950s, the Stanford Research Institute and General Electric Computer Laboratory had come up with just such a system using MICR (magnetic ink character recognition) along with that now-familiar font that is still in use today. The resulting E-13B was rapidly adopted by American banks and then spread to countries around the world.

In this designation, the letters E and B refer to design iterations while the number refers to the underlying 0.013-inch grid. The key to this solution was a combination of machine and human readability — if a magnetic scanning device failed, a person would need to check the numbers manually. To the machines, however, what mattered wasn’t the shape of the numbers as such but rather the magnetic waveform sensed when the checks were scanned. Essentially, each character communicated the same information but in entirely different ways for its target audiences of machine and human readers. Unlike a barcode, where the same data has to be presented in two formats, all of this information was wrapped up into a single set of legible numbers.

As Digital Check dot com explains it, “since the magnetic reader is measuring signal strength in a straight line – not capturing pixels like an ordinary camera – it’s not important where the magnetic ink is; what matters is how much ink there is in a vertical line at any given point.” To a machine, the numbers “look” entirely different.
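That "how much ink in a vertical line" idea is easy to see in miniature: as a character sweeps past the magnetic head, the signal at each instant is just the total ink in the current vertical column. A toy sketch with invented 3×5 glyphs, nothing like real E-13B geometry:

```python
def micr_waveform(glyph):
    """Column sums of inked cells: the signal a magnetic head 'sees'
    as the character moves past, one reading per vertical column."""
    cols = len(glyph[0])
    return [sum(row[c] for row in glyph) for c in range(cols)]

# Two toy glyphs with different pixel layouts but identical column totals:
# visually distinct to a human, indistinguishable to the magnetic reader.
glyph_a = [[1, 1, 1, 0, 0],
           [1, 0, 0, 0, 0],
           [1, 1, 1, 0, 0]]
glyph_b = [[1, 1, 1, 0, 0],
           [1, 1, 1, 0, 0],
           [1, 0, 0, 0, 0]]
print(micr_waveform(glyph_a))  # [3, 2, 2, 0, 0]
```

This is why E-13B's designers could tune the ink distribution for the machine while independently shaping the outlines for the human eye: the reader never sees the shapes, only the per-column totals.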

MICR came with other advantages, too. Its printed numbers were durable and remained machine-readable even if scuffed up or stamped over. MICR-readable checks could be printed using existing technology. The E-13B font was a parallel success and would go on to become a standard in most English-speaking (as well as many other) countries.

A competing standard, the CMC-7, was developed around the same time in France and is used across parts of Europe and elsewhere. This font was also designed to be readable by both humans and machines, though it operates like a bar code with vertical slats and gaps.

Today, check-scanning machines use both MICR and OCR (optical character recognition) to add redundancy and further reduce errors. For the most part, this second layer of machine reading is sufficient, but every once in a while (reportedly less than 1% of the time) a human still needs to step in and double-check things, so to speak.

E-13B is also not limited to checks — it has since come to be used on coupons, credit cards, transportation tickets and more. The original character set contained only numbers and a few symbols, though, so any letter variants seen in films, advertisements or other media actually evolved from this original and much more mundane creation.

Take for instance the Wheaton Font. Its developer, Ray Larabie, explains that “when you see an alphabet done in MICR E13B style,” like this one, it’s an “interpretation” based on the original set of numerals. In this case, he crafted “a font with just enough MICR E-13B flavor for the nerdiness to flow through but not so much that it impairs headline readability.”

The geeks behind E-13B presumably never imagined that their humbly functional, here-and-now font would become so beloved, a source of inspiration for both type designers and fictional futures.

The post Computer-Age Typography: Hybrid Legibility Explains that Ubiquitous Check Font appeared first on 99% Invisible.

reconbot · 22 days ago · New York City

WSJ: Amazon changed search results to boost profits despite internal dissent

1 Comment and 3 Shares

Amazon changed its search algorithm in ways that boost its own products despite concerns raised by employees who opposed the move, The Wall Street Journal reported today.

The change was made late last year and was "contested internally," the WSJ reported. People who worked on the project told the WSJ that "Amazon optimized the secret algorithm that ranks listings so that instead of showing customers mainly the most-relevant and best-selling listings when they search—as it had for more than a decade—the site also gives a boost to items that are more profitable for the company."

The goal was to favor Amazon-made products as well as third-party products that rank high in "what the company calls 'contribution profit,' considered a better measure of a product's profitability because it factors in non-fixed expenses such as shipping and advertising, leaving the amount left over to cover Amazon's fixed costs," the WSJ said.

Amazon made the change indirectly, the WSJ reported. Instead of adding profitability into the algorithm itself, Amazon changed the algorithm to prioritize factors that correlate with profitability, the article said.

When contacted by Ars, Amazon said it does not optimize the ranking of its search results for profitability.

In a statement, Amazon said:

The Wall Street Journal has it wrong. We explained at length that their 'scoop' from unnamed sources was not factually accurate, but they went ahead with the story anyway. The fact is that we have not changed the criteria we use to rank search results to include profitability. We feature the products customers will want, regardless of whether they are our own brands or products offered by our selling partners. As any store would do, we consider the profitability of the products we list and feature on the site, but it is just one metric and not in any way a key driver of what we show customers.

Amazon also gave us the same statements it provided to the Journal. But these statements don't necessarily disprove the WSJ's main point, which is that Amazon changed the algorithm in ways that prioritized profitability "without adding it directly to the algorithm." Amazon did acknowledge that it examines "long-term profitability" when it tests new search features.

Amazon’s control over platform investigated

The report was published as Amazon and other big Web companies face a Congressional antitrust probe into whether they abuse the control they wield over their platforms. In a letter to Amazon CEO Jeff Bezos Friday, the House Judiciary Committee demanded executive communications about the "algorithm that determines the search ranking of products on Amazon's platform."

Amazon lawyers rejected an early proposal "to add profit directly into the algorithm" because of concerns that it would "create trouble with antitrust regulators," the Journal reported. This concern was inspired partly by a €2.42 billion fine the European Commission issued to Google in 2017 for "abus[ing] its market dominance as a search engine by giving an illegal advantage to another Google product, its comparison shopping service."

Amazon told Ars that its "private label products are only about 1% of our total sales. This is far less than other retailers, many of whom have private label products that represent 25% or more of their sales."

Amazon’s algorithm changes

Amazon "declined to discuss the inner workings of its algorithm," the WSJ report said. But the Journal report offered these details, based on its sources:

When engineers test new variables in the algorithm, Amazon gauges the results against a handful of metrics. Among these metrics: unit sales of listings and the dollar value of orders for listings. Positive results for the metrics correlated with high customer satisfaction and helped determine the ranking of listings a search presented to the customer.

Now, engineers would need to consider another metric—improving profitability—said the people who worked on the project. Variables added to the algorithm would essentially become what one of these people called "proxies" for profit: the variables would correlate with improved profitability for Amazon, but an outside observer might not be able to tell that. The variables could also inherently be good for the customer.

For the algorithm to understand what was most profitable for Amazon, the engineers had to import data on contribution profit for all items sold, these people said. The laborious process meant extracting shipping information from Amazon warehouses to calculate contribution profit.

Amazon engineers added new variables to the algorithm to ensure that search results "scored higher on the profitability metric," but a Journal source "declined to say what those new variables were," the report said.

The WSJ report continued:

A review committee that approves all additions to the algorithm has sent engineers back if their proposed variable produces search results with a lower score on the profitability metric, this person said. "You are making an incentive system for engineers to build features that directly or indirectly improve profitability," the person said. "And that's not a good thing."

Amazon's responses to the WSJ and Ars stressed that other factors can still override profitability in search results. Amazon pointed out that it recently improved the discoverability of items that could be delivered the same day even though it hurt profitability, for example.

"When we test any new features, including search features, we look at a number of metrics, including long-term profitability, to see how these new features impact the customer experience and our business as any rational store would, but we do not make decisions based on that one metric," Amazon said.

Amazon often touts its commitment to customer service, and Bezos said last year that the biggest factor in Amazon's success "is obsessive compulsive focus on the customer as opposed to obsession over the competitor."

satadru · 28 days ago · New York, NY
Where my antitrust lawyers at?
reconbot · 23 days ago · New York City