Introduction to touch and multi-touch interfaces

Touch-screens are device screens that provide a touch-based interaction interface to the underlying operating system. Nowadays touch-screens are used in a wide variety of devices, particularly in mobile computing, which includes laptops, smartphones, handhelds and, lately, tablet PCs.

Touch-screen usage has proved to be straightforward, as touch-screen interaction is designed to be intuitive and quick to learn compared to other pointing devices ((Benko, H., Wilson, A. D., & Baudisch, P. (2006). Precise selection techniques for multi-touch screens. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1263-1272). ACM.)), ((Ichikawa, H., Homma, M., & Umemura, M. (1999). An experimental evaluation of input devices for pointing work. International Journal of Production Economics, 60-61, 235-240. doi:10.1016/S0925-5273(98)00162-5.)). Touch-screens are used for interacting with many different types of applications in many different contexts and often replace traditional pointing devices such as the mouse and keyboard ((Stone, D., Jarrett, C., Woodroffe, M., & Minocha, S. (2005). User interface design and evaluation. Morgan Kaufmann.)).

Touch-screen technology

Touch-screen technology is not new; nowadays it is widely used in everyday portable devices such as PDAs, smartphones and internet tablets, but also in a variety of larger installations such as ATMs, information kiosks, interactive installations, art installations etc.

Touch UIs have increased the usability of applications and proved to be more natural to use than traditional keyboard-and-mouse interfaces ((Hinckley, K., & Sinclair, M. (1999). Touch-sensing input devices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: The CHI is the Limit (pp. 223-230). Pittsburgh, Pennsylvania, United States: ACM. doi:10.1145/302979.303045.)), ((Ullmer, B., & Ishii, H. (2000). Emerging frameworks for tangible user interfaces. IBM Systems Journal, 39(3), 915-931.)).

Touch Screen Functional Description

Touch-screen functionality, in terms of user interaction, relates to the detection of an input (gesture, touch/contact) and the determination of its location on the screen. Identification of the touch input is based on the basic principle of finger/interface contact, though in many cases the finger/stylus/hand position is determined without the need for actual contact ((especially in the case of camera-based installations)).


In principle, to determine a touch location in its simplest implementation, two measurements are required: one to obtain an X-axis coordinate and one to obtain a Y-axis coordinate.


Touch measurements are then converted to point coordinates which in turn are reported to the host (PC or microcontroller) through a serial communications port (USB, RS232).
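
To make the conversion concrete, here is a minimal sketch of mapping raw measurements to pixel coordinates and packing them into a report for the host. The ADC range and the 5-byte packet layout are invented for illustration; they do not correspond to any specific controller's protocol.

    # Hypothetical raw-to-coordinate conversion and host report (illustrative only).

    def raw_to_screen(raw_x, raw_y,
                      raw_min=200, raw_max=3900,   # assumed ADC range
                      screen_w=640, screen_h=480):
        """Linearly map raw sensor readings to pixel coordinates."""
        span = raw_max - raw_min
        x = (raw_x - raw_min) * (screen_w - 1) // span
        y = (raw_y - raw_min) * (screen_h - 1) // span
        # Clamp, since raw readings can drift slightly outside the nominal range.
        return (max(0, min(screen_w - 1, x)),
                max(0, min(screen_h - 1, y)))

    def make_report(x, y, touching=True):
        """Pack coordinates into a simple 5-byte packet for the serial port."""
        flags = 0x80 | (0x01 if touching else 0x00)
        return bytes([flags, x >> 8, x & 0xFF, y >> 8, y & 0xFF])

    print(make_report(*raw_to_screen(2050, 1800)))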

Touch input is detected by a touch sensor. There are currently two well-known systems used to recognise touch interaction ((Other technologies are also available, but are beyond the scope of this document. These include surface acoustic wave, infrared, optical, dispersive signal technology and acoustic pulse recognition. See: touchscreens)):

  • Resistive
  • Capacitive

Resistive touch interfaces

The resistive system consists of a normal glass panel (usually attached in front of a screen) that is covered with a conductive and a resistive metallic layer. These two layers are held apart by tiny spacers. A scratch-resistant layer is placed on top of the whole setup to prevent damage to the device. The resistive system works by running an electrical current through the two layers while the monitor is operational.
When a user touches the screen, the two layers make contact at the exact spot where the film is pressed. The change in the electrical field is registered and the coordinates of the point of contact are sent to the computer. Once the coordinates are known, an interface driver translates the touch into something the operating system can understand, much as a computer mouse driver translates a mouse's movements into a click or a drag. Based on a calibration previously performed by the user, the pointer is displayed on screen or events are triggered.

In more detail, pressing the top surface compresses the flexible top layer onto the supported bottom layer, causing electrical contact between the two layers in the spans between the insulating dots. The insulating dots have known positions on the surface, defined by the manufacturer, and the position of the touch point is calculated from those positions, the resistance measured at the ends of the conductive material, and the calibration the user performed when initialising the device.

In practice, when a stylus or finger is used, it causes the conductive top layer to bend downwards, making electrical contact with the conductive layer beneath. A voltage is applied to the top layer and varies according to position; from this voltage variation, the touch position can be measured.
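
As a rough illustration of this read cycle, the sketch below alternates between driving one layer and sampling the other. The drive_plate and read_adc helpers are hypothetical stand-ins for a real microcontroller's GPIO and ADC API.

    ADC_MAX = 4095  # assuming a 12-bit ADC

    def read_axis(drive_plate, read_adc, plate, sense):
        drive_plate(plate)       # apply a voltage gradient across one layer
        raw = read_adc(sense)    # the other layer picks up the divided voltage
        return raw / ADC_MAX     # 0.0 .. 1.0 fraction along that axis

    def read_touch(drive_plate, read_adc):
        # Alternate the roles of the two layers to get both coordinates.
        x = read_axis(drive_plate, read_adc, plate="X", sense="Y")
        y = read_axis(drive_plate, read_adc, plate="Y", sense="X")
        return x, y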

Advantages and Usage :: Cost-effective; activated by a stylus, a finger or a gloved hand; functions even if damaged. Applications include healthcare, hospitality and industrial or work environments such as factories and restaurants. Resistive technology is used in a variety of devices and applications: PDAs, web phones and other handheld consumer devices, as well as industrial systems and their UIs.

Disadvantages :: Less responsive than capacitive screens; no multi-touch support; require pressure and can therefore be damaged by the user; can malfunction if the layers bend because of temperature changes; lose their calibration over time; offer only about 75% optical transparency; sharp objects can damage the resistive layers.

Capacitive touch interfaces

Capacitive touch-screens use electrodes to measure the conductive properties of objects that touch them (e.g. a finger). A very simple touch user interface, such as a single-touch switch, can be built with a single sense electrode, as illustrated in the following figure. A measurement circuit measures the capacitance of the electrode by charging and discharging the capacitor formed by the sense electrode and its surroundings.

[Figure: a single sense electrode capacitive touch switch]
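
The charge/discharge measurement can be sketched as follows. The pin helpers and the threshold are hypothetical; on real hardware this loop would typically be replaced by a timer capture or a dedicated capacitive-sensing peripheral.

    import time

    TOUCH_THRESHOLD_S = 50e-6   # assumed: a finger roughly doubles the charge time

    def measure_charge_time(gpio_drive_low, gpio_drive_high, gpio_read):
        gpio_drive_low()                  # discharge the electrode
        time.sleep(1e-4)
        start = time.perf_counter()
        gpio_drive_high()                 # charge through a series resistor
        while not gpio_read():            # wait until the pin reads logic high
            pass
        return time.perf_counter() - start

    def is_touched(charge_time):
        # A finger adds capacitance, so charging takes measurably longer.
        return charge_time > TOUCH_THRESHOLD_S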

The touch screen’s touchpad contains a two-layer grid of electrodes connected to a mixed-signal integrated circuit (IC) mounted on the reverse side of the pad. The upper layer contains vertical electrode strips, while the lower layer is composed of horizontal electrode strips. The IC measures the mutual capacitance from each of the horizontal electrodes to each of the vertical electrodes. A human finger near the intersection of two electrodes modifies the mutual capacitance between them, since the finger is conductive. When a user touches the screen, some of the charge is transferred to the user, changing the potential at that intersection. Once the panel controller detects this change, it sends the X-Y coordinates to the host.

[Figure: how a capacitive touch-screen detects a finger]
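
To illustrate how a controller might turn one frame of mutual-capacitance readings into a touch position, the sketch below finds the strongest drop on the grid and interpolates a weighted centroid for sub-electrode resolution. The 5x5 sample frame and the threshold are invented values.

    def locate_touch(deltas, threshold=10):
        """deltas[row][col]: drop in mutual capacitance at each intersection."""
        peak = max(v for row in deltas for v in row)
        if peak < threshold:
            return None                    # no finger present
        # A weighted centroid over the grid gives sub-cell resolution.
        total = wx = wy = 0.0
        for r, row in enumerate(deltas):
            for c, v in enumerate(row):
                total += v
                wx += v * c
                wy += v * r
        return wx / total, wy / total      # fractional (column, row) position

    frame = [[0, 0, 1, 0, 0],
             [0, 3, 8, 2, 0],
             [1, 9, 25, 7, 0],
             [0, 4, 10, 3, 0],
             [0, 0, 1, 0, 0]]
    print(locate_touch(frame))   # roughly (2.0, 2.0): the centre of the blob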

The iPod Touch and iPhone, for instance, are designed around this mutual-capacitance approach.

Advantages and Usage :: Capacitive technology transmits almost 90% of the light from the screen. Smooth and responsive interaction: capacitive screens perform better than resistive technology for user interaction.

Disadvantages :: A bare finger or other conductive material is required for the screen to operate; glass screens shatter easily when dropped; the control electronics for capacitive touch are more complex than for resistive technology and hence more expensive.

Differences between Capacitive and Resistive Touch-screen mobile devices

  1. Resistive touch-screens are activated by a finger’s pressure on the display’s surface, while capacitive touch-screens are activated by detecting a finger’s electrical charge on the display’s surface.
  2. In terms of technology, resistive touch-screens consist of two layers of conductive, transparent thin film attached above a normal screen; these layers register the change in resistance between them when the user presses their surface. Capacitive touch-screens measure the electrical signal produced on a transparent electrode grid that sits on top of the actual screen.
  3. Today (2011) resistive touch-screens are cheaper to make, but don’t support multi-touch.
  4. Capacitive screens provide clear transparency, while resistive screens offer less-than-perfect transparency.
  5. Resistive touch-screens do not support multi-touch: when a user presses with more than one finger, the device cannot determine the positions of the individual touches and instead reports an average of them.
  6. Over time, resistive touch-screens slowly drift and lose their calibration, and need to be re-calibrated (the user runs calibration software and taps the positions of markers shown on screen, aligning the touch layer with the underlying display; a sketch of such a calibration follows this list).
  7. Pressing is necessary on resistive touch-screens, while capacitive screens do not require it.
  8. Any type of stylus can be used on a resistive touch-screen, while on capacitive touch-screens the user must use special styluses with conductive tips.
  9. Resistive touch-screens operate in environments and situations where the user wears gloves or does not touch the screen directly with a conductive surface (skin). For example, in industrial environments where workers wear gloves or have liquids or dirt on their hands, resistive touch-screens will operate normally while capacitive screens will probably fail.
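
As an illustration of point 6, here is a sketch of a three-point calibration routine: the user taps three known screen markers, and an affine mapping from raw touch readings to screen pixels is solved for. The raw sample values below are invented.

    def solve3(m, v):
        """Solve the 3x3 linear system m * x = v by Cramer's rule."""
        def det(a):
            return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
                  - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
                  + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))
        d = det(m)
        out = []
        for i in range(3):
            mi = [row[:] for row in m]
            for r in range(3):
                mi[r][i] = v[r]
            out.append(det(mi) / d)
        return out

    def calibrate(raw_pts, screen_pts):
        """Return (a, b, c, d, e, f) with X = a*x + b*y + c and Y = d*x + e*y + f."""
        m = [[x, y, 1] for x, y in raw_pts]
        a, b, c = solve3(m, [X for X, _ in screen_pts])
        d, e, f = solve3(m, [Y for _, Y in screen_pts])
        return a, b, c, d, e, f

    # Hypothetical raw readings for markers shown at (50,50), (590,50), (320,430):
    a, b, c, d, e, f = calibrate([(310, 420), (3700, 400), (2050, 3500)],
                                 [(50, 50), (590, 50), (320, 430)])
    x, y = 2050, 1800
    print(a * x + b * y + c, d * x + e * y + f)   # the raw point mapped to screen space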

Examples of devices with capacitive touchscreens are the iPhone, iPod Touch, and G1 and Nexus Android phones. Devices with resistive touchscreens include the LG Viewty phone and the Nokia Internet Tablets.

Touch-screens: Natural User Interfaces (NUIs)

User interfaces once more face a paradigm shift: from Command-Line Interfaces (CLIs) to Graphical User Interfaces (GUIs) to Natural User Interfaces (NUIs).

A NUI is an interface that allows people to use the natural behaviours of everyday life to interact directly with computing devices and systems. According to Cartan, natural user interfaces have four defining characteristics:

  1. Direct, natural input: 3D gestures, speech recognition, facial expressions, and anything else that comes naturally, but for now it mostly means multi-touch.
  2. Realistic, real-time output: In order to harness natural responses, a NUI output has to be as fast and convincing as nature itself. When the user makes a natural gesture like a pinch, the display has to respond in an animated, often photorealistic way in real time, or else the illusion will be broken.
  3. Content, not chrome: The GUIs of today, with their windows and icons and menus, are laden with visual signals and controls, or “chrome”. This is one of the most unnatural features of a computer interface and tends to distract users from the actual content they are trying to work with. A NUI strips most of this away and lets users focus on one thing at a time.
  4. Immediate consequences: a NUI is not just spatially realistic, but temporally realistic as well. In the real world, actions have immediate consequences. If you want to go swimming, you don’t have to wait for a river to “boot up”. Splashes happen as you swim and will not be lost if you forget to save them. Similarly, NUI devices and applications start instantly and stop on a dime. Changes are saved as you go.

Touch-screens are considered NUIs primarily because they support a number of different natural input modes, including touch, multi-touch, hand gestures, handwriting etc.

Interacting with Mobile Touch-screens

The Basic Design Concepts of Multi-Touch Interaction on Mobile devices

Using touch for input in mobile devices has the following characteristics:

  • Natural and intuitive. Pointing at and touching things with a finger is a natural human activity, so interactions with UI elements should be designed to correspond to how users interact with objects in the real world. Translating natural interactions to the mobile platform should provide consistency and reliability in terms of realism and responsiveness.
  • Direct and engaging. Touch interaction gives users a feeling of directness, primarily because there is no intermediary device. Interactions take place in a natural way while the touch-screen device itself becomes “transparent”, so users are not disengaged from their tasks.
  • Portable & minimal. Devices that support touch technologies can be more minimalistic and compact compared to office layouts that include a mouse, keyboard, touchpad, trackball and other devices. This saves space and provides great portability. In mobile devices such as smartphones it also gives designers and manufacturers the ability to build smaller, lighter and mechanically simpler hardware.
  • Less intrusive to the social environment. In many social contexts, using touch devices is discreet and less intrusive than using a traditional keyboard. As a result, touch-screen interaction is much less distracting than typing or clicking, especially in social contexts where silence is necessary.
  • Accuracy. In early designs, users needed more effort to target objects accurately using touch compared to a mouse or pen. Today, improved hardware and software implementations have largely addressed this (see below).

General Guidelines

Designing touch interaction requires attention to the following:

  • Small objects are difficult to touch. The graphic of an object and its touch target area can differ (the touch area can be larger than a small element, as long as it does not overlap with other active elements; a hit-testing sketch follows this list)
  • Object spacing is important. Horizontal and vertical spacing between objects should be set according to the screen size (it depends on the device size)
  • Object proximity is important. Moving between objects that are located far apart can tire the user, especially when a different finger must be used (on large mobile screens)
  • Text input is not easy with traditional QWERTY keyboard layouts
  • Hovering over objects is difficult or impossible (touch-screens that detect depth overcome this limitation).
  • Right-clicking takes time.
  • Small targets near the edge of the display are difficult to touch and thus manipulate. For example, toolbars, window controls and sliders near the edge are hard to interact with.
  • Dragging an object is often difficult because of finger occlusion and the need for continuous contact.
  • Avoid assuming that the user will use the trackball instead of the touch-screen
  • Helpers over text should provide visual feedback and handles big enough for the user to interact with.
  • Nested menus over active background elements are often difficult to control (e.g. menus over hypertext)
  • Multiple selections: it is hard to make multiple selections.
  • Zooming into the content is a necessary feature.
  • Offer users the ability to adjust the size of common controls on the fly. Different users have difficulties with different UI control objects.
  • Accidental manipulation: flicks and other user gestures often interact with screen elements (by mistake!)
  • Forgiveness: users often select screen elements by accident. We should give them the ability to undo.

  • It is better to use an element common to the operating system at hand than to invent a similar one.
  • Always explain and give examples of the gestures that you might use. Users cannot guess them.
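
To make the first guideline concrete, here is a hit-testing sketch in which the touch target is grown beyond the visible graphic, with a fall-back that snaps to the nearest target centre. The margin, snap distance and button layout are illustrative choices, not a standard algorithm.

    import math

    def hit_test(targets, tx, ty, margin=12, max_snap=24):
        """targets: list of (name, x, y, w, h) rectangles in pixels."""
        best, best_dist = None, float("inf")
        for name, x, y, w, h in targets:
            # The active area extends `margin` pixels beyond the visible graphic.
            if (x - margin <= tx <= x + w + margin and
                    y - margin <= ty <= y + h + margin):
                dist = math.hypot(tx - (x + w / 2), ty - (y + h / 2))
                if dist < best_dist:
                    best, best_dist = name, dist
        if best is not None:
            return best
        # Fall back: snap to the nearest target centre within max_snap pixels.
        for name, x, y, w, h in targets:
            dist = math.hypot(tx - (x + w / 2), ty - (y + h / 2))
            if dist <= max_snap and dist < best_dist:
                best, best_dist = name, dist
        return best

    buttons = [("ok", 100, 100, 40, 40), ("cancel", 160, 100, 40, 40)]
    print(hit_test(buttons, 148, 120))   # lands between the buttons, snaps to "ok"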

Best Practices of Touch Screen Interface Design for Mobile devices

  • Responsiveness: Both hardware and software interfaces should be responsive and fast enough for physical interaction. Users performing physical interaction with systems expect faster response times than with mouse pointing or key pressing, primarily because they need to put more effort into pressing buttons, dragging objects etc.
  • Interface layout, screen dimensions / object proportions: compared to traditional pointing devices, where a cursor is used, touch-screen interaction needs more space because of the size of the fingers. Interaction space is not necessarily tied to large graphic elements but can also be achieved through larger invisible interaction areas around active elements, or through algorithms that determine the proximity of the touch point to its surrounding active elements ((P. Parhi, A. K. Karlson, and B. B. Bederson, “Target size study for one-handed thumb use on small touchscreen devices,” in Proceedings of the 8th conference on Human-computer interaction with mobile devices and services (ACM, 2006), 203-210.)), ((A. Sears and B. Shneiderman, “High precision touchscreens: design strategies and comparisons with a mouse,” International Journal of Man-Machine Studies 34, no. 4 (1991): 593-613.)). Also important is where objects are placed on screen, depending on the interaction scenario the user is about to perform. For instance, typing differs from browsing or image scaling, and the user is expected to use different hand postures and gestures to achieve each task ((J. O. Wobbrock, B. A. Myers, and H. H. Aung, “The performance of hand postures in front- and back-of-device interaction for mobile computing,” International Journal of Human-Computer Studies 66, no. 12 (2008): 857-875.)).
  • Design for all: we should always consider that users have different profiles and different preferences when interacting with computing devices. Thus accessibility ((S. K. Kane, J. P. Bigham, and J. O. Wobbrock, “Slide rule: making mobile touch screens accessible to blind people using multi-touch interaction techniques,” in Proceedings of the 10th international ACM SIGACCESS conference on Computers and accessibility (ACM, 2008), 73-80.)) and personalisation should be of great importance in our designs. For example, we should not underestimate the fact that left-handed and right-handed users have different interaction profiles.
  • Screen colours and brightness: screen colours (background and foreground) should be carefully selected, because interaction on touch-screens produces glare effects, mainly from fingerprints and other dirt on the screen surface.
  • Read, write or manipulate?: Almost all cell phones use fonts that are too small to read with ease. The screens are small, so the fonts must be small to fit. While you, as a developer, cannot do anything about the phone’s default font size, you can try to make the fonts as large as possible in your specific app. This will increase your app’s usability.
    Another fact is that mobile devices are easier to use for reading than for manipulating and inputting data. This is because typing (the traditional method of data input) is difficult, mainly due to the QWERTY keyboard metaphor. QWERTY keyboards on small screens are hard to use and often frustrate users:
  • single- or double-finger input,
  • many typing mistakes because of the screen size,
  • slow response times,
  • single touch / multi-touch,
  • parallax between the eye-to-finger or eye-to-stylus position,
  • occlusion effects etc.
  • Alternatives include: [see Lecture_01 – Interacting with mobile smartphones]
  • Examples:
    • Swype, SlideIT, ShapeWriter
    • BlindType
    • Gesture Search
    • 8pen keyboard
  • Interaction methods and techniques (a sketch of the two-finger gesture maths follows this list):
    • Pointing
    • Tap / Click ((Tap is the new Click))
    • Selecting
    • Dragging
    • Pinch
    • Rotate
    • Pan
    • The importance of tactile feedback ((E. Hoggan, S. A. Brewster, and J. Johnston, “Investigating the effectiveness of tactile feedback for mobile touchscreens,” in Proceedings of the twenty-sixth annual SIGCHI conference on Human Factors in Computing Systems (ACM, 2008), 1573-1582.))
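
As promised above, a sketch of the two-finger gesture maths: comparing two successive frames of two touch points yields a pan (movement of the midpoint), a pinch scale (ratio of finger distances) and a rotation (change in the angle of the line joining the fingers). The sample frames are invented.

    import math

    def two_finger_gesture(prev, curr):
        """prev/curr: ((x1, y1), (x2, y2)) touch points in two successive frames."""
        (p1, p2), (c1, c2) = prev, curr
        # Pan: movement of the midpoint between the two fingers.
        pan = ((c1[0] + c2[0]) / 2 - (p1[0] + p2[0]) / 2,
               (c1[1] + c2[1]) / 2 - (p1[1] + p2[1]) / 2)
        # Pinch: ratio of the distances between the fingers.
        scale = math.dist(c1, c2) / math.dist(p1, p2)
        # Rotate: change in the angle of the line joining the fingers.
        angle = (math.atan2(c2[1] - c1[1], c2[0] - c1[0])
               - math.atan2(p2[1] - p1[1], p2[0] - p1[0]))
        return pan, scale, math.degrees(angle)

    # The fingers move apart while the line between them turns slightly:
    print(two_finger_gesture(((100, 100), (200, 100)),
                             ((90, 100), (215, 110))))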


Graphical User Interfaces for Touch and Multi-touch mobile phones

  • Bubbles – a GUI for touchscreen mobiles 
  • Fuse UI
  • 3D UI TAT Cascades
  • Digital Aria 3D
  • SPB Mobile UI (3D)


  • Lavra Labs UI


Touch Usability on mobile devices

Dan Saffer provides a set of illustrations of “activity zones” on touchscreen phones and tablets, based on which areas are easiest to reach given the normal ways of holding the devices. He suggests putting frequent actions in the “easy” zones and infrequent or dangerous ones in the “reach” zones.

Tablet Activity Zones

Mobile Phone Activity Zones


Examples and Prototypes Using touch interaction

Tags and Microsoft Surface

Tag Map Magnifier for Microsoft Surface

This prototype demonstrates an application built for the Microsoft Surface using commonly used objects with Surface-specific tags to act as a virtual map magnifier. The ability of Microsoft Surface to recognize and respond to tagged objects is described here ((http://msdn.microsoft.com/en-us/library/ee804965%28v=surface.10%29.aspx)).


atracTable: a multitouch table tells you about your Nespresso

The atracTable tells you about the type of Nespresso you have ordered, including the region in which it was grown.

Microsoft Milan

Physical Controls for mobile devices

dsLabs introduced a DIY hardware knob that can be used as a virtual, physical control on multitouch-capable touchscreen devices. It uses conductive tape to fake a two-finger touch.
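
In software, such a knob can be tracked with very little code: the two conductive pads register as two touch points, and the knob's rotation is simply the angle of the line joining them. The sketch below is purely illustrative and is not dsLabs' actual code.

    import math

    def knob_angle(p1, p2):
        """Angle (degrees) of the line joining the knob's two contact points."""
        return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0])) % 360

    print(knob_angle((100, 100), (140, 130)))   # two simulated contact points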

Technology Comparison Chart

Multi-touch

  1. http://en.wikipedia.org/wiki/Multi-touch
  2. Multi-Touch Systems that I Have Known and Loved

Touch APIs & SDKs

  1. TouchKit (OSS)
  2. Sparsh-UI (OSS/GNU)
  3. Nokia QT :: 1
  4. GestureWorks – Flash Multitouch SDK (Commercial)
  5. TUIO (OSS)
  6. Microsoft Surface
  7. Natural User Interface Snowflake (Commercial)
  8. Tbeta
  9. MTmini

Hardware

  1. 3M™ Multi-touch Developer Kit
  2. Dell Latitude XT2 Tablet PC
  3. HP TouchSmart tx2z series
  4. Ideum MT table
  5. Acer T230H
  6. Resistive touch-screens :: 1, 2
  7. Capacitive touch-screens :: 1, 2

Further Reading: Other touch/tangible applications and technologies:

Mitsubishi “3D touch panel” display

This technology is based on a capacitive touch panel and promises to detect the distance between a finger and the panel, allowing for new interface options.

Mitsubishi Electric calls it a “3D touch panel” because it can determine not only the x- and y- (plane) coordinates of a finger but also its z- (normal direction) coordinate. The prototype has a 5.7-inch screen with a resolution of 640 x 480 pixels (VGA).

[Figure: Mitsubishi 3D touch panel prototype]

A detailed review can be found here: techOn

Flexible Display

Impress


Impress frees the touch screen from its technical stiffness, coldness and rigidity. It closes the distance between human and technology: it is no longer the user who must yield to technology; here the display itself gives way to the human. Impress is an opportunity for user and technology to approach one another, above all from the side of technology.

It is a flexible display made of foam and force sensors, which is deformable and feels pleasantly soft. Like other touch screens, Impress works with the parameters of position and time, but in addition it responds, above all, to the intensity of pressure.


The user can merge with and collaborate with technology more than ever before. They can squeeze out information and fly through rooms; they can form three-dimensional objects and set them in motion by deforming the surface. Four short applications offer an insight into an entirely new world of deeply sensitive and intuitive interaction possibilities.

Created with Arduino and Processing.


Modeling with a flexible display – [[prototype]]

Virtual Gravity

Virtual Gravity is an interface between the digital and analog worlds. With the aid of analog carriers, virtual terms can be picked up and transported from a loading screen to an analog scale. The importance and popularity of these terms, output as a virtual weight, can be physically weighed and compared. Thus impalpable digital data gain an actual physical existence and become a sensually tangible experience.

Diploma project by Silke Hilsing: silkehilsing.de
More information: virtualgravity.de

Sound design: andreabassi.org

Microsoft Mouse 2.0 (multi-touch mouse concepts)

Microsoft demonstrates Mouse 2.0 and its research into enhancing input devices.

Publication: Mouse 2.0: Multi-touch Meets the Mouse (pdf)

Custom DIY touch interfaces

Tactilus Multi-touch Screen

Tactilus Multitouch Screen, Discussion, Christopher’s screen plans

  1. PS3 Camera Dissection: http://www.youtube.com/watch?v=txX5rXtY7CY