I'm a char salesman. I share things about: Programming, SysOps, Civil Rights, education, and things that make me happy. And robots.
1428 stories · 18 followers

Removing Racism from Thingiverse

1 Comment and 2 Shares

Thingiverse

A teacher contacted Adafruit about racist content on Thingiverse and asked for our help in encouraging MakerBot, Thingiverse, and Stratasys to enforce their existing rules and remove the hatred, white supremacy, bigotry, racism, antisemitism, and violence from the Thingiverse platform.

On June 24th, 2020, we emailed Nadav Goshen, the CEO of MakerBot; MakerBot PR; and Stratasys, the parent company. We included specific examples that were on the site, along with meaningful actions to take: not only removing the hateful content, but also ways for Thingiverse to act against racist content on an ongoing basis.

The good news: MakerBot (Thingiverse) replied and removed the content the same day, within hours.

We informed the teacher who contacted us, and we’d like to thank MakerBot for taking quick action.

The work is not done. If you see racist content on Thingiverse, click “Report Thing” and report it. If other 3D printing repositories host racist content, let them know.

MakerBot’s response to our email (images removed) is below.

From: MakerBot PR pr@makerbot.com
Subject: Re: Removing Racism from Thingiverse
Date: June 24, 2020 at 7:29:36 PM EDT

Dear Limor Fried, Phillip Torrone, and Adafruit Industries,

Thank you for bringing these concerns, examples, and suggested meaningful actions to our attention.

At MakerBot, we stand firmly in support of movements that support equality and are against injustice, violence, and systemic racism. We are committed to fostering a diverse and inclusive community, within the company, and the 3D printing community, including the Thingiverse community.

As you correctly note in your letter, our Terms of Use (available at https://www.makerbot.com/legal/terms) includes an Acceptable Use Policy that prohibits, among other things, “racism, bigotry, hatred, or physical harm of any kind against any group or individual.”

We encourage the Thingiverse community to actively report violations of our Terms of Use, including the Acceptable Use Policy, through a “Report Thing” link that we include on each Thing (i.e., design) web page. We review all reports that we receive from this “Report Thing” link or are otherwise brought to our attention, and take enforcement action(s) as detailed in our Terms of Use.

Concerning the specific content you note in your letter, as of June 24, 2020, the content is no longer available to the public and the users who posted such content have been notified of their violation of our Terms of Use.

Best,
MakerBot Industries

Shared by reconbot (New York City), 5 days ago
1 public comment
jepler, 10 days ago (Earth, Sol system, Western spiral arm):
Thank you MakerBot.

Password Changing After a Breach

3 Shares

This study shows that most people don't change their passwords after a breach, and when they do, the new password is often weaker than or equal in strength to the old one.

Abstract: To protect against misuse of passwords compromised in a breach, consumers should promptly change affected passwords and any similar passwords on other accounts. Ideally, affected companies should strongly encourage this behavior and have mechanisms in place to mitigate harm. In order to make recommendations to companies about how to help their users perform these and other security-enhancing actions after breaches, we must first have some understanding of the current effectiveness of companies' post-breach practices. To study the effectiveness of password-related breach notifications and practices enforced after a breach, we examine -- based on real-world password data from 249 participants -- whether and how constructively participants changed their passwords after a breach announcement.

Of the 249 participants, 63 had accounts on breached domains; only 33% of the 63 changed their passwords and only 13% (of 63) did so within three months of the announcement. New passwords were on average 1.3× stronger than old passwords (when comparing log10-transformed strength), though most were weaker or of equal strength. Concerningly, new passwords were overall more similar to participants' other passwords, and participants rarely changed passwords on other sites even when these were the same or similar to their password on the breached domain. Our results highlight the need for more rigorous password-changing requirements following a breach and more effective breach notifications that deliver comprehensive advice.
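As a rough illustration of what "1.3× stronger on a log10-transformed scale" means, here is a small Go sketch with made-up guess counts; the study's actual strength model and numbers are not reproduced here:

```go
package main

import (
	"fmt"
	"math"
)

// strengthRatio compares two passwords whose strength is expressed as an
// estimated number of attacker guesses (as guessability models report it).
// The study's "1.3x stronger" figure compares log10-transformed strengths,
// i.e. orders of magnitude of guesses, not raw guess counts.
func strengthRatio(oldGuesses, newGuesses float64) float64 {
	return math.Log10(newGuesses) / math.Log10(oldGuesses)
}

func main() {
	// Hypothetical numbers: old password cracked after ~10^8 guesses, the
	// new one after ~10^10. Raw guesses grew 100x, but on the log10 scale
	// the new password is only 1.25x "stronger".
	fmt.Printf("%.2f\n", strengthRatio(1e8, 1e10))
}
```

The log transform is why a password can be "1.3× stronger" on average while still being easy to guess in absolute terms.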

News article.

Shared by reconbot (New York City), 5 days ago

Old Days 2

1 Comment and 12 Shares
The git vehicle fleet eventually pivoted to selling ice cream, but some holdovers remain. If you flag down an ice cream truck and hand the driver a floppy disk, a few hours later you'll get an invite to a git repo.
Shared by reconbot (New York City), 10 days ago
1 public comment
alt_text_bot, 11 days ago:
The git vehicle fleet eventually pivoted to selling ice cream, but some holdovers remain. If you flag down an ice cream truck and hand the driver a floppy disk, a few hours later you'll get an invite to a git repo.

crazyscottie, 11 days ago:
FWIW, the comics and alt text are working for me on mobile. Is alt_text_bot still needed?

dukeofwulf, 11 days ago:
I like it. Even on desktop it's easier for me to read than the hover text.

AlexHogan, 11 days ago:
I like it too. Good Bot!

acdha, 11 days ago:
It’s a tradition at this point

steelhorse, 11 days ago:
For the longest time I couldn't finish my program because someone had misplaced the "blue" punch card.

9a3eedi, 10 days ago:
Whoever made this bot, I love you

CONTACT YOUR ELECTED OFFICIALS CONTACT YOUR ELECTED...

1 Share

CONTACT YOUR ELECTED OFFICIALS

CONTACT YOUR ELECTED OFFICIALS

CONTACT YOUR ELECTED OFFICIALS

CONTACT YOUR ELECTED OFFICIALS

Shared by reconbot (New York City), 27 days ago

My People

2 Shares
Good for them.
In one Slack room, two employees who work in the Times’ customer service center were stunned by the rate of cancellations.
Employee A: they have to first get aligned on what the company is going to say

which is always tougher

Employee B: 172 cancels so far for this…. every time I refresh it just grows faster and faster

Employee A: 203 editorial cancellations between 4 - 5 = the highest hourly total ever in the data we have

buckle up everyone!
Shared by reconbot (New York City), 29 days ago, and by skorgu, 31 days ago

From Print to Digital: Making Over a Million Archived Photos Searchable

1 Share

A team of technicians has scanned over a million photos into a New York Times database. It took a team of technologists to make those photos searchable.

By Jonathan Henry

Illustration by Suzie Shin; Photographs from The New York Times Archive

A block away from the hustle and bustle of Times Square in New York City, buried three floors below street level, lies The New York Times archive. The archive is housed in a sprawling room that is packed with hundreds of steel filing cabinets and cardboard boxes, each containing news clippings, encyclopaedias, photographs and other archival material.

Started in the late 1800s, the archive first served as a collection of news clippings about newsworthy events and people. In the late 1960s, it was merged with a photo library managed by The Times’s art department. The archive (which is sometimes referred to as “the morgue”) now contains tens of millions of news clippings and an estimated five million printed photographs.

Many of these historical documents are available only in print form. In 2018, however, The Times embarked on a project — as part of a technology and advertising collaboration with Google — to preserve the photographs in the collection and store them digitally. A team of technicians manually scans about 1,000 photographs per day into a server, and in July 2019, they scanned their one millionth photograph.

Many of these photographs have found a new life in stories produced by The Times’s archival storytelling project, Past Tense.

With a digital photographic archive now at over a million scans, we needed to build an asset management system that allows Times journalists to search and browse through the photos in the archive from their laptops.

A digital system inspired by the archive

To architect our asset management system, we drew inspiration from the archive itself. The organization strategy in the physical archive is loosely similar to the Dewey decimal system, where an index references the location of photos associated with a subject.

The archive contains well over 700,000 index cards that are alphabetically sorted by subject from A. Cappella Chapel Choir to ZZ Top. Each index card contains the location of the folder in which the corresponding collection of photos can be found. Occasionally, a subject is divided into subtopics with multiple references to different folders.

As an example, an index card about Amelia Earhart is further broken down into multiple subtopics, such as portraits, individual snapshots and European receptions.

An index card indicating where to find photos of Amelia Earhart.

If we were interested in European receptions, we would find the folder labeled “4794-L-8.”
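The card catalog described above is essentially a subject-to-folder mapping, which can be sketched in a few lines of Go. The data here is illustrative; only the “4794-L-8” label for European receptions comes from the article, and the other entry is hypothetical:

```go
package main

import "fmt"

// IndexCard models the paper card: a subject plus subtopic entries, each
// pointing at the folder label where the matching photos are filed.
type IndexCard struct {
	Subject string
	Folders map[string]string // subtopic -> folder label, e.g. "4794-L-8"
}

// folderFor looks up which folder holds the photos for a given subtopic.
func folderFor(card IndexCard, subtopic string) (string, bool) {
	label, ok := card.Folders[subtopic]
	return label, ok
}

func main() {
	card := IndexCard{
		Subject: "Earhart, Amelia",
		Folders: map[string]string{
			"Portraits":           "4794-L-2", // hypothetical label
			"European receptions": "4794-L-8", // label shown on the real card
		},
	}
	if label, ok := folderFor(card, "European receptions"); ok {
		fmt.Println(label)
	}
}
```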

Some folder covers may contain additional text that provides a high-level description about the collection of photos found within the folder; this text is especially useful when it’s the only text associated with a photo.

The front cover of a folder containing photos of Amelia Earhart.

The backs of individual photos usually contain contextual data, such as stamps of publication dates and folder names, handwritten notes, crop marks and taped news clippings indicating publication.

Left: Amelia Earhart at a dinner hosted by the Aero Club at the Palais d’Orsay. Right: the back of the photo showing contextual information about the photograph.

From print to screen

The team of technicians scans folders and photos; the index card catalog was scanned a couple of years ago. Technicians scan the fronts of the folders and both sides of the photos. Preserving this contextual data is of utmost importance since it is needed to classify and index data for our internal search tool.

We store the scanned photos as TIFFs because the file format uses lossless compression, or no compression at all, which ensures we save the full quality of the archived images.

Once several filing cabinet drawers have been scanned, the photos are then uploaded to Google Cloud Storage (GCS) and sent through the ingestion pipeline, which involves several Go microservices running in a Google Kubernetes Engine (GKE) cluster. Each service communicates with the others via Cloud Pub/Sub, which is used to asynchronously deliver events to each service. In our case, the event we care about is uploading an image to GCS. When an image is uploaded, a Pub/Sub notification gets published and our ingestion process begins.
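The upload-triggers-notification flow above can be sketched without any cloud dependencies by letting a Go channel stand in for the Pub/Sub subscription. This is a mock of the shape of the pipeline, not the Times's actual service code; the event fields mirror what a GCS object notification would carry:

```go
package main

import "fmt"

// UploadEvent mirrors the information a GCS upload notification carries:
// which bucket and which object (file path) was just finalized.
type UploadEvent struct {
	Bucket string
	Object string
}

// startIngest sketches one pipeline service: it consumes upload events from a
// channel (standing in for a Cloud Pub/Sub subscription) and kicks off
// processing for each scanned image. A real service would ack the message and
// publish a follow-up event for the next stage of the pipeline.
func startIngest(events <-chan UploadEvent, done chan<- string) {
	go func() {
		for ev := range events {
			// Here we would fetch the TIFF from GCS, then convert/analyze it.
			done <- fmt.Sprintf("ingested gs://%s/%s", ev.Bucket, ev.Object)
		}
	}()
}

func main() {
	events := make(chan UploadEvent)
	done := make(chan string)
	startIngest(events, done)

	// An upload publishes a notification, which begins the pipeline.
	events <- UploadEvent{Bucket: "archive-scans", Object: "drawer-12/earhart-front.tif"}
	fmt.Println(<-done)
}
```

Decoupling the stages this way is what lets each microservice scale and fail independently inside the GKE cluster.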

Converting images for the web

To prepare the image for potential publication on the Times website or apps, this service converts the image from TIFF to JPEG. We do this for two reasons: JPEGs tend to be more efficient for the web and they also tend to offer a better size-to-quality ratio. The JPEG is then resized twice in order to store additional dimensions of the photo.

Additional data such as GCS file path, photo side (front or back), folder and drawer association are stored in a Cloud SQL Postgres database for other services to query. All JPEG copies are then saved to GCS, which triggers a Pub/Sub notification to a topic subscribed to by our analyzer service for the next step of the pipeline.

Making Images Searchable

In order to make images searchable, we first need to digitally extract the text from the photos.

We built an analyzer service to store contextual data from the photos. To extract text from our images we decided to use Google’s Vision API, which provides optical character recognition (the process of converting handwritten or printed text into machine-encoded text) and label classification of images via pre-trained models.

After we extract text from the photos with the Vision API, we save the results to our Postgres DB. We then normalize and index the data so it can be searched.
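The normalization step might look something like the following sketch. The pipeline's real normalization rules are not described in the article, so these (lowercasing, stripping punctuation, collapsing whitespace) are illustrative:

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// normalize sketches the kind of cleanup OCR output needs before indexing:
// lowercase everything, replace punctuation with spaces, and collapse runs of
// whitespace, so stamps and handwritten notes become plain searchable tokens.
func normalize(ocr string) string {
	var b strings.Builder
	for _, r := range strings.ToLower(ocr) {
		if unicode.IsLetter(r) || unicode.IsDigit(r) {
			b.WriteRune(r)
		} else {
			b.WriteRune(' ')
		}
	}
	return strings.Join(strings.Fields(b.String()), " ")
}

func main() {
	// Typical back-of-photo text: a publication stamp plus a caption note.
	fmt.Println(normalize("PUB. Jun-19,  1928 -- Earhart,\nAmelia"))
}
```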

The final step of the ingestion pipeline is indexing and structuring photo metadata to perform full-text queries. We use Elasticsearch, a near-real-time search platform built on top of Lucene, an open-source full-text search engine.

Although indexing and structuring metadata is straightforward for index cards, it gets tricky for folders and photos. A folder might lack the text for a meaningful search result, so we map it to its associated index cards and parse the relevant text describing the contents of the folder. The same is done for photos, but the additional text is gathered from its folder. This is where we leverage Postgres to query for the folder-to-index-cards and photo-to-folder relationship.
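The enrichment described above, falling back to folder and index-card text when a photo's own text is sparse, can be sketched as follows. The document shape and helper are hypothetical, not the real index mapping:

```go
package main

import (
	"fmt"
	"strings"
)

// SearchDoc is a flattened document we might index in Elasticsearch for one
// photo. Field names here are illustrative.
type SearchDoc struct {
	PhotoID string
	Text    string
}

// buildDoc enriches a photo's own OCR text with the text of its folder and of
// the index cards that reference that folder, so a photo with a sparse back
// still matches subject searches.
func buildDoc(photoID, photoText, folderText string, cardTexts []string) SearchDoc {
	parts := []string{photoText, folderText}
	parts = append(parts, cardTexts...)
	return SearchDoc{
		PhotoID: photoID,
		Text:    strings.TrimSpace(strings.Join(parts, " ")),
	}
}

func main() {
	doc := buildDoc(
		"photo-001",
		"", // this photo's back carried no usable text
		"Earhart Amelia European receptions",
		[]string{"Earhart, Amelia -- see folder 4794-L-8"},
	)
	fmt.Println(doc.Text)
}
```

The folder-to-card and photo-to-folder relationships that feed this step are exactly what the Postgres queries mentioned above resolve.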

Once we’ve built these relationships and parsed all relevant text, we store our data in ElasticSearch. This data then becomes immediately accessible to Times journalists via a searchable interface in our asset management tool.

Although we still have millions of photos left to scan, Times journalists can currently search over one million photos and the complete index card catalogue in the archive. We continually improve the search experience by cleaning data that might result from an imperfect optical character recognition result, and we continue to experiment with new methods that will allow us to gain better insight into how to structure and classify our images.


Jonathan Henry previously served as the tech lead for the Photo team at The New York Times. He is currently an engineer at Spotify.

Photographs from The New York Times Archive. Top collage photographs by, clockwise from left: Sam Falk/The New York Times, Eddie Hausner/The New York Times, Chester Higgins Jr./The New York Times, Sam Falk/The New York Times, Ruby Washington/The New York Times, Larry C. Morris/The New York Times and Sam Falk/The New York Times.

End collage photographs by, clockwise from left: Tim Koors for The New York Times, Larry C. Morris/The New York Times and D Gorton/The New York Times.


From Print to Digital: Making Over a Million Archived Photos Searchable was originally published in NYT Open on Medium, where people are continuing the conversation by highlighting and responding to this story.

Shared by reconbot (New York City), 42 days ago