Rob Lane looks at what multi-touch technology is currently available and where the market is heading.
We human beings are a tactile species; we like to touch. And those of you with children will appreciate how touching is as much a part of the early learning experience as crawling, crying, eating and filling nappies. In fact, almost as soon as a child can sit, he or she is ready to engage with your smartphone or tablet – it’s as natural to them as touching your face or playing with the dangly toys on their baby gym. Once at school, of course, engaging with displays becomes the norm for work and play. This is bad luck for the majority of us with non-touch PCs/Macs at home: Junior can’t help touching, and usually becomes increasingly frustrated on realising it has no effect other than angering mum or dad.
For a ‘new’ market, multi-touch sure has been slow to get off the mark. IBM started developing the first touchscreens in the late 1960s, and in 1972 Control Data introduced its PLATO IV educational computer, which offered single-touch interaction via a 16 x 16 array of touch points. Danish electronics engineer Bent Stumpe devised a capacitive touchscreen in the same year, and his design was put to work at CERN in 1977 as a new human-machine interface for the control room of the Super Proton Synchrotron particle accelerator.
There was a further advance in 1991 when Pierre Wellner published a paper on his multi-touch Digital Desk, which supported multi-finger and pinching motions. Then, between 1999 and 2005, Fingerworks developed its Touchstream keyboards and iGesture pad.
In the meantime, Microsoft started work in 2001 on its capacitive SUR40 Surface table-top platform (renamed PixelSense in 2012 to free up the Surface moniker for its tablet line), while – at the same time – Mitsubishi was developing its multi-touch, multi-user system, DiamondTouch, which went commercial in 2008 – after which the commercial market really started to open up.
In 2015, multi-touch is a growing, highly competitive market, with a variety of players looking to influence and dominate the market. So what are the various flavours of multi-touch currently available?
Old-style surface capacitance has been superseded by projected capacitance (PC). MultiTaction’s camera-based Computer Vision Through Screen (CVTS) system is more expensive, but is arguably more flexible and responsive. PQ Labs offers IR overlay frames, and FlatFrog supplies touchscreens using its proprietary optical system. Other less mainstream or now moribund solutions include: resistive; surface acoustic wave; acoustic pulse recognition; embedded; and force sensing.
Displax’s main PC product is Skin MultiTouch, a transparent foil that can be applied to non-conductive surfaces to transform them into multi-touch, and the company has also launched its Oqtopus table and Pad upright. 3M supplies a variety of PC multi-touch solutions, including the 46in table display and 32in table/wall display – both offering 60 simultaneous touches.
Zytronic utilises PC with its MPCT sensing solution, and supports displays of up to 85in with 40 simultaneous touchpoints, while Eyefactive supplies PC touch frames (30-84in).
PQ Labs’ G4S IR frames are available for integration or as overlays for existing monitors and incorporate the company’s fourth-generation Cell Imaging Technology with a sample rate of 200fps. Touch points vary: two, six, 12 and 32 (unlimited).
MultiTouch’s unlimited-touch MultiTaction displays are available in 55in (ultra-thin bezel) and in stackable 55in and 42in versions. A variety of 42in and 55in embedded models (Windows 7 and 8; Linux) is also available. MultiTouch also offers two sizes of turnkey MultiTaction iWall, comprising 12 x 55in displays and 8 x 55in displays.
radarTOUCH, from Lang, offers something completely different – something that blurs the lines between multi-touch and gesture interaction. The radarTOUCH ‘measuring device’ is a small box that emits an invisible rotating laser to measure the distance of all objects in its 2D environment. The software is a Java program that intercepts the data detected by the laser and sends it to a PC via an Ethernet connection. Marry radarTOUCH to a display or displays and you can make the surface operate as if it were multi-touch – the difference being that you don’t actually have to touch the surfaces.
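The principle is straightforward: a rotating laser rangefinder reports angle/distance pairs, and software projects those polar readings onto the display plane to synthesise touch points. The sketch below illustrates the idea in Python; radarTOUCH’s actual data format, units and mounting geometry are not documented here, so the frame layout, field names and thresholds are all assumptions for illustration only.

```python
import math

def scan_to_touch_points(scan, screen_w_mm, screen_h_mm, max_range_mm):
    """Convert (angle_deg, distance_mm) readings from a 2D laser scanner
    assumed to sit at the top-left corner of the display plane into
    screen coordinates in millimetres.

    Readings with no return (distance <= 0) or beyond max_range_mm are
    treated as 'no object in front of the screen' and discarded.
    """
    points = []
    for angle_deg, dist_mm in scan:
        if dist_mm <= 0 or dist_mm > max_range_mm:
            continue  # nothing within the sensing plane at this angle
        rad = math.radians(angle_deg)
        x = dist_mm * math.cos(rad)  # along the top edge of the display
        y = dist_mm * math.sin(rad)  # down the face of the display
        # Only readings that land on the display count as touch points
        if 0 <= x <= screen_w_mm and 0 <= y <= screen_h_mm:
            points.append((round(x), round(y)))
    return points

# A hand held at 45 degrees, roughly 1.4m from the sensor, registers
# near the centre of a 2m x 1.2m display surface; a distant reading
# beyond the plane is ignored.
touches = scan_to_touch_points(
    [(45.0, 1414.2), (10.0, 9999.0)],
    screen_w_mm=2000, screen_h_mm=1200, max_range_mm=3000,
)
```

A real implementation would also need to cluster adjacent beam hits into a single touch and track points across successive sweeps to deliver press/move/release events to the application, but the polar-to-screen mapping above is the core of it.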
With other gesture control applications looking to encroach on multi-touch territory (for example, Oblong’s Mezzanine system and Elliptic Labs’ ultrasonic tech for smartphones), what does the future hold for multi-touch technology? I foresee the technologies being used together more frequently to begin with, with less reliance on touch going forward. But, given our instinctive desire to use our fingers, it’s unlikely that gesture control will ever completely trump multi-touch.