A tale of two audiences

Consider a health information site with two sets of fact sheets: A simplified version for the lay audience and a technical version for physicians. During testing, a physician participant reading the technical version stopped to say, “Look. I have five minutes in between patients to get the gist of this information. I’m not conducting research on the topic, I just want to learn enough to talk to my patients about it. If I can’t figure it out quickly, I can’t use it.” We’d made some incorrect assumptions about each audience’s needs and we would have missed this important revelation had we not tested the content.

You’re doing it wrong

Have you ever asked a user the following questions about your content?


How did you like that information?

Did you understand what you read?


It’s tempting to ask these questions, but they won’t help you assess whether your content is appropriate for your audience. The “like” question is popular, particularly in market research, but it’s irrelevant in design research: whether people like something has little to do with whether they understand it or will use it. Dan Formosa offers a great explanation of why you should avoid asking people what they like during user research. To see what’s wrong with the “understand” question, it helps to know a little about how people read.

The reading process

Reading is a product of two simultaneous cognitive elements: decoding and comprehension.

When we first begin to read, we learn that certain symbols stand for concepts. We start by recognizing letters and associating the forms with the sounds they represent. Then we move to recognizing entire words and what they mean. Once we’ve processed those individual words, we can move on to comprehension: Figuring out what the writer meant by stringing those words together. It’s difficult work, particularly if you’re just learning to read or you’re one of the many adults who have low literacy skills.

While it’s tempting to have someone read your text and ask them if they understood it, you shouldn’t rely on a simple “yes” answer. It’s possible to recognize every word (decode), yet misunderstand the intended meaning (comprehend). You’ve probably experienced this yourself: Ever read something only to reach the end and realize you don’t understand what you just read? You recognize every word, but because the writing isn’t clear, or you’re tired, the meaning of the passage escapes you. Remember, too, that if someone misinterpreted what they read, there’s no way to know unless you ask questions to assess their comprehension.

So how do you find out whether your content will work for your users? Let’s look at how to predict whether it will work (without users) and test whether it does work (with users).

Estimate it

Readability formulas measure the elements of writing that can be quantified, such as the length of words and sentences, to predict the skill level required to understand the writing. They can be a quick, easy, and cheap way to estimate whether a text will be too difficult for the intended audience. The results are easy to understand: many state the approximate U.S. grade level of the text.

You can buy readability software. There are also free online tools from Added Bytes, Juicy Studio, and Edit Central; and there’s always the Flesch-Kincaid Grade Level formula in Microsoft Word.
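
Under the hood, these formulas reduce writing to a handful of counts. Here’s a minimal Python sketch of the Flesch-Kincaid Grade Level calculation. The grading formula itself is the published one; the syllable counter is a deliberately naive stand-in for illustration (real tools use pronunciation dictionaries and more careful rules):

    import re

    def count_syllables(word):
        # Naive heuristic: one syllable per run of consecutive vowels.
        # Real readability tools use dictionaries and better rules.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_kincaid_grade(text):
        # Published formula:
        # 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        if not words:
            return 0.0
        syllables = sum(count_syllables(w) for w in words)
        return (0.39 * (len(words) / sentences)
                + 11.8 * (syllables / len(words)) - 15.59)

Notice that nothing in the calculation ever looks at what the words mean or what order they appear in, which is exactly the limitation described next.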

But there is a big problem with readability formulas: Most of the features that make text easy to understand, like content, organization, and layout, can’t be measured mathematically. Using short words and simple sentences doesn’t guarantee that your text will be readable. Nor do readability formulas assess meaning at all. For example, take the following sentence from A List Apart’s About page and plug it into a readability formula. The SMOG Index estimates that you need a third-grade education to understand it:


We get more mail in a day than we could read in a week.


Now, rearrange the words into something nonsensical. The result: still third grade.


In day we mail than a week get more in a could we read.
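
To see why the scramble changes nothing, here’s the same kind of sketch for the SMOG Index, using the published formula with the same naive syllable counter as above. It counts only sentences and polysyllabic words, so word order is invisible to it:

    import math
    import re

    def count_syllables(word):
        # Same naive vowel-run heuristic as the earlier sketch.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def smog_grade(text):
        # Published formula, where a "polysyllable" is a word of
        # three or more syllables:
        # 1.0430 * sqrt(polysyllables * (30 / sentences)) + 3.1291
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
        return 1.0430 * math.sqrt(polysyllables * (30 / sentences)) + 3.1291

    original = "We get more mail in a day than we could read in a week."
    scrambled = "In day we mail than a week get more in a could we read."

    print(smog_grade(original))   # about 3.1, roughly third grade
    print(smog_grade(scrambled))  # identical score: same words, same counts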


Readability formulas can help you predict the difficulty level of text and help you argue for funding to test it with users. But don’t rely on them as your only evaluation method. And don’t rewrite just to satisfy a formula. Remember, readability formulas estimate how difficult a piece of writing is. They can’t teach you how to write understandable copy.