One million forks in a centrifuge

For the past few weeks, I have served as Fathom’s in-residence explorer of 3D printed information design with Formlabs’ Form1+ printer. Because my goal was to focus on the physical medium and form, I tried to stay away from directly 3D-ifying data visualizations that already exist in 2D (think extruded line graphs, bar graphs, etc.), or from arbitrarily mapping data points onto 3D space for the sake of aesthetics. Instead, I zeroed in on the features of physical objects that cannot be expressed on a screen, breaking them into two categories: material and interaction. More on forks later.

To encode meaning into material itself, I considered what makes us want to pick up and touch certain objects, as well as the subtle features that cue our immediate understanding of an object – what it’s for, who uses it, how old it is. In terms of interactions, I analyzed tasks that are very efficient for humans in the 3D world: quickly sorting and distributing many small objects, using peripheral vision, rotating and seeing multiple views of an object, and focusing on a small detail while keeping the bigger picture visible in the background.

Mind map of information in 3D

Most of my experiments employed texture as a way to show what an object represents. If the texture is legible and successfully references familiar objects, there is no need for a key – one piece would clearly represent grain, another meat – though both made of 3D printed plastic. My inspiration for the food-related textures came from a series of chocolates designed by Nendo, a Japan-based studio, which led me to a massive list of Japanese words with no English equivalent, all used to describe the textures of different foods.

Texture experiments inspired by the chocolatexture project from Japanese design studio Nendo.
Texture tiles 3D-printed in black resin.

One example of an efficient human task that really stuck with me was the motion of quickly sorting silverware into compartments. Thinking along the lines of categorization, I remembered the card game SET from my childhood, and set out to design a 3D version. In this design, I also allowed for a hierarchy of information: certain characteristics and trends are discernible at a glance, while more subtle details reveal themselves through a closer look at a subset of pieces.

The number of prongs on each piece is immediately obvious, but subtle differences in the length of the prongs are apparent once similar pieces are grouped together.
The blocks can be stacked and combined in different ways to make the “game” more interactive.

In a more spontaneous experiment, I tested what kind of objects people are compelled to pick up and play with in order to understand them, designing a sort of handheld clock with multiple hands that nest within each other. No one (myself included) could quite figure out what it should be, though maybe that is partly what made it compelling. I also discovered some of the limitations of SLA printing, the method of 3D printing that uses a UV laser to harden a vat of liquid resin one layer at a time. For starters, the process has difficulty with articulating parts that require a certain clearance between them, as the parts tend to fuse together during printing. It is also challenging to clean support material out of inner channels.

Telescoping pie chart, Golden Compass, orbiting planets, turbine – what is it?

My final series of explorations were based on data from a national public libraries survey, containing indicators on library use and spending from 1992-2013. I chose to really push the idea that objects can tell stories, with the example of sea glass in mind. Sea glass experts can tell how old a piece of glass is, what kind of bottle it came from, and where it was made – all from subtle cues like texture, color, thinness, purity, and knowledge of trade routes.

I let the content of the data itself inform my design by referencing the metal type used in letterpress printing: each piece of type represents a state in 2003 (left) and 2013 (right). The height of each piece reflects how much money that state’s libraries spent on printed material per capita (as of 2013), and the amount of wear on each piece on the right indicates how much spending on printed material was cut over the last decade.

Massachusetts was one of the biggest supporters of the printed word in 2013 – spending $3.51 per capita on books, journals, and magazines. Maine spent slightly less ($2.49), but has hardly reduced its spending since 2003. Georgia not only spends the least on printed materials ($1.02), but also cut its spending the most drastically in the last decade ($2.80 per capita in 2003).
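As a rough illustration of the kind of mapping behind those heights, the snippet below linearly scales per-capita spending onto a printable height range (the same interpolation Processing’s map() function performs). The height range and the straight linear scaling are assumptions made for the sake of the example, not the actual model-generation code.

public class TypeHeight {

  // Linearly map a value from one range onto another.
  static double map(double v, double inLo, double inHi, double outLo, double outHi) {
    return outLo + (outHi - outLo) * (v - inLo) / (inHi - inLo);
  }

  public static void main(String[] args) {
    double minSpend = 1.02, maxSpend = 3.51;   // Georgia and Massachusetts, per capita (2013)
    double minHeightMm = 10, maxHeightMm = 35; // assumed printable height range

    double[] spending = { 3.51, 2.49, 1.02 };  // MA, ME, GA
    for (double s : spending) {
      System.out.printf("$%.2f per capita -> %.1f mm tall%n",
          s, map(s, minSpend, maxSpend, minHeightMm, maxHeightMm));
    }
  }
}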


Tackling such an open-ended prompt, I feel I only scratched the surface of 3D printed information design, and further exploration is definitely in order. Some other ideas thrown around the studio were to make metal casts from 3D printed parts, design for different abilities (e.g. blindness), use mechanical properties like friction to sort how easily different parts slide across a table, and create a big data analogue to the silverware in a drawer: one million forks in a centrifuge.

Colorful Language

Continuing our fascination with color naming across cultures, we set our summer intern, Malika Khurana, on a journey to discover new colors. Color naming, no matter your language, is a verbal process. So one of the driving interests was to see how this could be integrated into a mobile app using speech recognition. In this post, Malika retells her adventure.

The World Color Survey (WCS) was an anthropological study conducted in the 1970s that used color to study the effect that culture may have on language. Field workers surveyed 2,696 native speakers, representing 110 unwritten languages, by asking them to name each chip in a carefully chosen set of color chips (many of which are difficult to categorize into our basic colors in English).

Terrence and I took the 330-question survey and found the results compelling. We expected that some of our color names would match more closely than others, but didn’t expect to use entirely different words for the same colors.

Malika and Terrence’s survey results show on the left comparisons of color blocks that were given the same name, and on the right similar colors that had different names.

For the most part, my colors are consistently darker than his, possibly because I didn’t name a single color “black”. The two of us have very different ideas of what teal and violet are. After some investigation, my “teal” is closer to the dictionary definition of teal, but violet is more loosely defined as anything between purple and blue on the color wheel. No one here supports color brainwashing though — teal is whatever you want it to be!


We both made up our own names for the unappealing range of greenish yellows (“badness” and “gross”), and when it comes to light blues, Terrence uses the sky as a reference while I reference the water. This may be because I was a competitive swimmer for ten years, so I’ve always felt some innate connection to water.

Malika’s complete survey results.
Terrence’s complete survey results.
This graphic is a sample of the results from the World Color Survey. The three color blocks highlighted are results from one language in which the people used only three color terms to name the same colors Malika and Terrence were shown.
These are the 330 Munsell color chips that participants were asked to name in the World Color Survey and in our Colorful Language app. Image courtesy of the World Color Survey.

Building the app

When I arrived at Fathom, Ben and Terrence approached me with an idea to take the WCS one step further. They wanted to create an app that would make it easy for anyone to take the World Color Survey. The main difference from the WCS is that our app focuses on how people name colors differently within the English language. For example, what I call teal is different from Terrence’s definition of teal.

I built the app in Processing for Android, along with some Android and Java libraries, as it seemed like the easiest route given my previous experience with Java. It was convenient to be able to pull from any of the libraries and implement the same thing at least two different ways, but at times it was tricky to figure out which of those ways was best.

We chose to implement the survey using Android’s built-in speech recognition. Speaking your responses makes the survey easier to complete, and it doesn’t limit users to those who know how to spell. It is also closer to how the original WCS was administered, verbally and in person. Besides, it’s fun to make people think you’re crazy when you’re on the train enthusiastically shouting colors into your phone.

“Blue.”

“Aquamarine!”

“MAUVE!!”

I found a handy example for using Android’s speech API, and then I was off! When you speak a color into the speech recognizer, the app suggests up to three possible words it thinks you said. The WCS looked for single-word responses, so I coded the app to take the recognizer’s candidate words, omit compound words and capitalized duplicates (the speech recognizer counts “blue” and “Blue” as different words), and display the top three remaining results so the user can confirm one.
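A rough sketch of that filtering step might look something like the following. The class and method names are illustrative, and the candidate list stands in for whatever Android’s recognizer hands back; this isn’t the app’s actual code.

import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

public class ColorNameFilter {

  // Keep only single words, fold case so "blue" and "Blue" collapse into
  // one entry, and return at most the first three survivors.
  static List<String> topThree(List<String> candidates) {
    LinkedHashSet<String> seen = new LinkedHashSet<>();
    for (String c : candidates) {
      String word = c.trim().toLowerCase();
      if (word.isEmpty() || word.contains(" ")) continue; // skip compound (multi-word) responses
      seen.add(word);                                      // the set silently drops case-duplicates
      if (seen.size() == 3) break;
    }
    return new ArrayList<>(seen);
  }

  public static void main(String[] args) {
    // stand-in for the candidate list the speech recognizer returns
    List<String> fromRecognizer = List.of("Blue", "blue", "navy blue", "blew", "bloo");
    System.out.println(topThree(fromRecognizer)); // prints [blue, blew, bloo]
  }
}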

The app flow for naming a color

Aside from the obvious motivation to name all 330 colors in the survey as a way to help us better understand human perception and culture, we wanted to encourage users to complete the entire survey, so we created a live-updating “color profile” that grows as you name more colors and doubles as a UI element.

Visualizations of your color profile data update every time you name a new color.

Each block of color in the detailed color profile is a pixel-by-pixel composite of each of the colors that were given the same name. From far away you can see what your average “peach” looks like, while close up you can see each of the different colors you’ve named “peach”.

From far away, your eyes adjust and average the colors, kind of like a pointillist painting.
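A minimal sketch of how one such block might be assembled, assuming each named color is stored as an RGB integer. The real app rendered its graphics with Processing; here plain Java’s BufferedImage stands in, and the block size and sample colors are made up.

import java.awt.image.BufferedImage;
import java.util.List;
import java.util.Random;

public class ProfileBlock {

  // Fill every pixel of the block with one of the colors the user gave the
  // same name, so the block averages out from a distance and reads as
  // individual swatches up close.
  static BufferedImage composite(List<Integer> namedColors, int size) {
    BufferedImage block = new BufferedImage(size, size, BufferedImage.TYPE_INT_RGB);
    Random rng = new Random(42);
    for (int y = 0; y < size; y++) {
      for (int x = 0; x < size; x++) {
        int c = namedColors.get(rng.nextInt(namedColors.size()));
        block.setRGB(x, y, c);
      }
    }
    return block;
  }

  public static void main(String[] args) {
    // three colors a user might all have called "peach" (hex RGB, invented here)
    List<Integer> peaches = List.of(0xFFCBA4, 0xF5B895, 0xFFDAB9);
    BufferedImage peachBlock = composite(peaches, 64);
    System.out.println("block size: " + peachBlock.getWidth() + "x" + peachBlock.getHeight());
  }
}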

To create the color profile I had to regenerate and render a mini data visualization every time the user names a new color, essentially on every page. I very quickly learned the importance of efficient for-loops, especially when rendering such heavy images. To further improve efficiency, the app only downloads a user’s color profile from the server when it is first launched. Once the app is running, it stores and updates that information locally as the user continues to name subsequent colors, and only sends updates to the server to keep it in sync.
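That sync strategy might be sketched roughly as follows, with the network calls left as placeholders. The real app’s endpoints and local storage aren’t shown, and the names here are made up.

import java.util.HashMap;
import java.util.Map;

public class ColorProfileCache {

  // Local copy of the user's answers, keyed by chip index (0-329).
  private final Map<Integer, String> namesByChip = new HashMap<>();

  // Called once when the app launches: pull the full profile from the server.
  void loadFromServer() {
    namesByChip.putAll(fetchProfileFromServer()); // hypothetical network call
  }

  // Called every time the user names a color: update locally, push only the delta.
  void nameChip(int chipIndex, String name) {
    namesByChip.put(chipIndex, name);
    sendUpdateToServer(chipIndex, name); // hypothetical incremental update
  }

  // --- placeholders for the real networking layer ---
  private Map<Integer, String> fetchProfileFromServer() { return new HashMap<>(); }
  private void sendUpdateToServer(int chipIndex, String name) { /* no-op in this sketch */ }
}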

A few years ago, webcomic xkcd conducted their own survey to see how people name colors differently.

We have more hypotheses we’d like to test, so please, if you have an Android, download the app here and read our other posts about the color kit and the color “grue”.
