Practical Formal Methods in Human–Computer Interaction
Abstract and Keywords
This chapter explores how precise formal methods can be used effectively and practically in interaction design. In computer science, the term ‘formal methods’ refers to a suite of techniques drawing on mathematical notions of sets, logic, and functions, or on precise diagrammatic notations, most of which are currently focused primarily on safety-critical applications in the aerospace or nuclear industries. While research into broader use of these methods could be regarded as a theoretical interest, the early development of formal methods was driven as much by practical considerations as by theory. This chapter features two case studies of formal notations and their use in areas of practical interaction design beyond safety-critical applications, where they were understood, used, and appropriated by clients and designers with no formal training or expertise. Each offers specific notations and techniques to the reader and also explores more general lessons for creating practical formal methods for HCI.
One might feel that the very association of the terms ‘formal methods’ and ‘human’ is an oxymoron; how can anything that is formal make any sense when faced with the complexity and richness of human experience? However, little could be more formal than computer code, so in that sense everything in Human–Computer Interaction (HCI) is formal. Indeed, all the chapters in this book are about practical formal methods, in the sense that they involve a form of mathematical or symbolic manipulation. From Fitts’ Law to statistical analysis of experimental results, mathematics is pervasive in HCI.
However, in computer science ‘formal methods’ has come to refer to a very specific set of techniques. Some of these are symbolic or textual based on sets and logic, or algebraic representations. Others are more diagrammatic. All try to specify some aspect of a system in precise detail in order to clarify thinking, perform some sort of analysis, or communicate unambiguously between stakeholders across the design and engineering process.
The use of such methods is usually advocated in order to ensure the correctness, or more generally, the quality of computer systems. However, they are also usually regarded as requiring too much expertise and effort for day-to-day use, being principally applied in safety-critical areas outside academia. Similarly, in HCI, even when not dismissed out of hand, the principal research and use of formal methods is in safety-critical areas such as aerospace, the nuclear industry, and medical devices.
However, this chapter demonstrates that, in contrast to this perception, formal methods can be used effectively in a wider range of applications for the specification or understanding of user interfaces and devices. Examples show that, with appropriate choice and use, formal methods can be used to allow faster development and turnaround, and be understood by those without mathematical or computational background.
We will use two driving examples. One is from many years ago and concerns the development of simple transaction-based interactive information systems. The other is more recent and relates to the design of smart devices where both physical form and digital experience need to work together.
In both cases, a level of appropriation is central: the adaptation of elements of specific formal methods so that their power and precision are directed at specific needs, while being as lightweight and flexible as possible in other areas. Because of this, the chapter does not introduce a particular fixed set of notations or methods, but rather, through the examples, demonstrates a number of heuristics for selecting and adapting formal methods. This said, it is hoped that the specific methods used in the two examples may also themselves be of direct use.
The rest of this chapter is structured as follows. First, Section 7.2 looks at some of the range of formal methods used in computer science in general, and then at the history and current state of the use of formal methods in HCI. Sections 7.3 and 7.4 look at the two examples, for each introducing a specific problem, the way formalism was used to address it, and then the lessons exposed by each. Finally, we bring together these lessons and also look at emerging issues in HCI where appropriate formalisms may be useful or, in some cases, essential for both researcher and practitioner as we attempt to address a changing world.
7.2 Being Formal
7.2.1 Formalism in Computing: Theoretical and Practical
Formal methods in computer science is primarily about specifying aspects of computer systems in some mathematical or symbolic form, in order to better communicate, understand, or verify their behaviour and properties. The roots of this can be found in the work of the early giants of computer science, for example, Dijkstra’s use of assertions to prove program correctness.
Although they have diverged somewhat in recent years, formal theory has been part of practical computing since the beginning, as Alan Turing’s early contributions make very clear. However, currently, theoreticians tend to deal more with abstract questions about the nature of computation, and the formal methods community is more connected with software engineering and developing formally based tools, notations, and methods that could, in principle, form part of system development processes. Indeed, while the particular notations used often get named and become the ‘brand’, an important issue is always how these fit into broader engineering processes.
One class of formal methods is concerned with specifying the data structures of the state of the system and the way these change during (typically atomic) actions. This form of specification is almost always textual using sets, logic, and functions, although in different mixes depending on the particular notation and methods. The most widely known of these is probably Z (Spivey, 1992), which uses a codified and strongly typed form of standard set theory (Figure 7.1). Note that mathematicians are often fairly free in their use of notation, modifying it to fit the particular theorem or problem area, but computing prefers more fixed notations, especially if these are to be subject to automatic analysis or verification.
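To give a flavour of this state-and-operations style in executable rather than Z notation, the following Python sketch (a hypothetical illustration, not drawn from any published specification) models a small state schema together with an operation guarded by a precondition:

```python
from dataclasses import dataclass, field

@dataclass
class AddressBook:
    """State schema: a partial function from names to phone numbers."""
    entries: dict[str, str] = field(default_factory=dict)

    def add(self, name: str, number: str) -> None:
        """Operation schema: a precondition plus a state change."""
        # Precondition (in Z, part of the operation's predicate):
        # the name must not already be known.
        assert name not in self.entries, "precondition violated"
        # Postcondition: entries' = entries extended with the new pair.
        self.entries[name] = number

book = AddressBook()
book.add("ada", "555-0100")
```

In a notation such as Z, the precondition and postcondition would be stated declaratively rather than executed; the sketch simply makes the structure of such a specification concrete.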
The other use of formal methods focuses more on the overall order in which actions are performed. Early examples of this include state transition networks (STNs) and flowcharts for more sequential operations, both of which will be seen later in this chapter. STNs are used extensively, including in the specification of communication protocols where it is essential that equipment produced by different manufacturers works together. Note that STNs are effectively a diagrammatic form of finite state machines, described in detail in Chapter 8.
Figure 7.2 shows an example of an STN specifying the behaviour of a caching protocol. The circles represent states, and the arrows potential transitions between the states. Note how the arcs, drawn differently, represent different kinds of transitions (in this case, whether they are initiated by activity on the system bus or internally by a processor).
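As a rough executable analogue of such a network, the transitions can be held as a simple table mapping (state, event) pairs to successor states. The states and events below are invented for illustration and are only loosely suggestive of a cache protocol:

```python
# A state transition network as a dict: (state, event) -> next state.
# States and events here are illustrative only.
stn = {
    ("invalid", "read_miss"): "shared",
    ("shared", "write_hit"): "modified",
    ("modified", "bus_read"): "shared",
    ("shared", "bus_write"): "invalid",
}

def run(start: str, events: list[str]) -> str:
    """Follow a sequence of events through the network."""
    state = start
    for e in events:
        state = stn[(state, e)]
    return state

final = run("invalid", ["read_miss", "write_hit", "bus_read"])
```

Because the network is just data, the same table can drive execution, simulation, or exhaustive analysis of the states it can reach.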
Simple STNs only allow sequential, one-state-at-a-time processes, but variants, such as Harel Statecharts (Harel, 1987), have been developed that allow concurrent activity, alongside a number of notations specialized for this, including Petri nets and process algebras (mostly variants of CCS and CSP). In some cases, temporal logics (that is, logics that reason about time) are used, primarily as a way to specify properties that must be satisfied by a system fully specified using other formalisms.
In fact, this last point demonstrates two slightly different uses of formal notations:
(i) specification of desired properties of a system; and
(ii) precise and detailed specification of a particular system.
Effectively, the first is about formalizing requirements and the second formalizing the detailed (technical) design. These can be used alone, i.e., (i) can be used on its own to create test cases or simply clarify requirements; (ii) can be used to analyse the behaviour or to verify that code does what the specification says it should. Alternatively, the two kinds may be used together where the detailed specification (ii) is verified to fulfil the formal requirements (i). Where using both, these may use the same formal notation, or two different ones, as noted with the use of temporal logics.
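The distinction can be illustrated with a small sketch, in which both the dialogue and the property are invented for the purpose: a detailed specification (ii) as a transition table, and a desired property (i) checked against it:

```python
# (ii) A toy dialogue specification: states and labelled transitions.
# The dialogue itself is hypothetical.
spec = {
    "main":    {"edit": "editing", "quit": "done"},
    "editing": {"save": "main", "cancel": "main"},
}

# (i) A desired property: every non-final state offers a single-step
# route back to the "home" state -- a crude requirements-level check.
def satisfies_home_property(spec: dict, home: str) -> bool:
    for state, moves in spec.items():
        if state == home:
            continue
        if home not in moves.values():
            return False
    return True

ok = satisfies_home_property(spec, "main")
```

Here the property is stated separately from the system specification, so the same check could be run against alternative detailed designs, which is exactly how requirements-level and design-level formalisms are combined.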
Simpler diagrammatic formalisms have always been well used, and many, including Statecharts, have found their way into Unified Modeling Language (UML) (Booch et al., 1999). There have been classic success stories of formal methods used in industrial practice dating back many years, notably, a large-scale project to specify large portions of IBM Customer Information Control Systems (CICS) in Z (Freitas et al., 2009). However, on the whole, more complex formal methods tend to require too much expertise and are perceived as taking too much effort. Hence, adoption is very low except for a few specialized safety-critical fields where the additional effort can be cost effective.
One suggestion to deal with this low adoption has been to develop domain-specific notations, designed explicitly for a very narrow range of applications using representations that are more likely to be comprehensible by domain experts, but which are built over or can be connected back to some form of semantic core that allows analysis and reasoning (Polak, 2002). The use of formal methods in HCI can be seen as one such specialization, as can the examples later in this chapter.
7.2.2 Why Formal Methods in HCI—History and Utility?
The use of formal methods in HCI dates back to the earliest days of HCI itself as a defined discipline. Much of this use was driven by strong practical concerns. Reisner’s (1981) use of Backus-Naur form (BNF) to specify what we would now call the dialogue of interactive systems was driven by the desire to understand the properties and potential problems of alternative designs prior to the expensive process of creating systems prototypes. The choice of BNF was suggested by the successful use of formal grammars elsewhere in computing, notably in compiler construction, which itself was inspired by the lexical–syntactic–semantic decomposition from linguistics.
Early designers of User Interface Management Systems wanted to be able to create systems more rapidly and reliably, but found the user interface code increasingly complex. They also turned to dialogue specifications using a number of formalisms from grammars to production rules, and turned the linguistic triad into the distinction between presentation, dialogue and (programmer’s interface to) functionality, embodied in the Seeheim Model (Pfaff and Hagen, 1985). Although the Seeheim Model itself is now all but forgotten, the presentation–dialogue–functionality distinction is explicit or implicit in many aspects of user interface design, for example, in the separation of web interface style information into the Cascading Style Sheet (CSS) (presentation layer).
In parallel, arising from cognitive science, was a growing interest in how the same or similar formalisms could be used to specify the way humans cope with interactions. Sometimes this involved models primarily of the human, Card, Moran, and Newell’s (1983) Model Human Processor (MHP) being the most well-known example; and sometimes modelling both human and system in order to understand the interactions, including Cognitive Complexity Theory (Kieras and Polson, 1985), which used production rules as a cognitive model alongside a state model of the system in order to explore issues such as closure (the way one sometimes forgets ‘finishing off’ actions, such as picking up a receipt, when one’s primary goal, say buying a ticket, is completed).
The interest in formal methods more broadly in computer science also turned to user interfaces, both because of their complexity, and also because any systems specified ultimately ended in some form of user interface. Sufrin’s (1982) specification of a display editor is the first complete example of this, using Z to specify the effects of editing actions (such as insert a character) on the internal state of an editor (text, cursor position, etc.).
The ‘York Approach’ arose primarily from the latter stream, but with a particular focus on specifying and analysing generic properties of user interfaces (such as the ability to predict and observe the system’s behaviour when you perform an action). The idea of this was to be able to talk about these properties independently of the specific system that embodied them. The most well-known output of this was the PIE model (Dix and Runciman, 1985; Dix, 1991), which took a black-box view of the interactive system, describing only the inputs (e.g., keystrokes, mouse movements) and outputs (e.g., display, printouts).
The complexity of human interaction and user interface systems always limited the kinds of analysis possible, and this stream of work, rather like Seeheim, is now represented most strongly in the vocabulary of formal system properties developed during that time (observability, predictability, reversibility, etc.).
However, the strong algebraic properties of undo made it particularly suitable for abstract analysis, with early results (Dix and Runciman, 1985; Dix, 1991) showing that it was fundamentally impossible to have any undo system where the undo button perfectly reversed any action, including itself, which was a common claim at the time. This work was later extended, first to multi-user undo (Abowd and Dix, 1992) exposing potential problems and solutions before the first such systems were built, and then later using category theory (Mancini, 1997) to prove that any undo system that obeyed fairly simple properties fell into one of two camps: flip undo (where one previous state is stored and undo toggles between this and the most recent state), and stack undo (where undo/redo traverses back and forth through previous states, but any action freezes the process and discards actions back to that point).
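The distinction between the two camps can be sketched in executable form. This is a simplified illustration of the behavioural difference only, not Mancini’s category-theoretic treatment:

```python
class FlipUndo:
    """Flip undo: one backup state; undo toggles between it and current."""
    def __init__(self, state):
        self.state, self.backup = state, state
    def do(self, action):
        self.backup = self.state
        self.state = action(self.state)
    def undo(self):
        self.state, self.backup = self.backup, self.state

class StackUndo:
    """Stack undo: undo/redo walk a history; a new action discards the redo tail."""
    def __init__(self, state):
        self.history, self.pos = [state], 0
    @property
    def state(self):
        return self.history[self.pos]
    def do(self, action):
        del self.history[self.pos + 1:]          # freeze: discard redo branch
        self.history.append(action(self.state))
        self.pos += 1
    def undo(self):
        self.pos = max(0, self.pos - 1)
    def redo(self):
        self.pos = min(len(self.history) - 1, self.pos + 1)

f = FlipUndo("a"); f.do(lambda s: s + "b"); f.do(lambda s: s + "c")
f.undo(); f.undo()   # flip undo toggles: two undos return to "abc"
s = StackUndo("a"); s.do(lambda t: t + "b"); s.do(lambda t: t + "c")
s.undo(); s.undo()   # stack undo walks history: two undos return to "a"
```

The double-undo at the end makes the difference concrete: flip undo applied twice is the identity, whereas stack undo applied twice steps two states back through the history.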
During this same period there was also work on formally based architectural description such as PAC (Coutaz, 1987) and ARCH-Slinky (Bass et al., 1991), computational abstractions for construction of particular kinds of user interface such as Dynamic Pointers (Dix 1991, 1995), and a number of dedicated user interface formalisms that are still used today, including Interactive Cooperative Object, or ICO (Navarre et al. 2009), a variant of Petri Nets, and ConcurTaskTrees, or CTT (Paterno, 1999), a more formally based variant of hierarchical task analysis (Shepherd, 1989).
As a final word in this section, note again that while some of this early work arose from more theoretical concerns, the majority had roots in practical problems that arose when either trying to understand or build interactive user interfaces.
More information about the early development of this area can be found in a number of monographs and collections (Thimbleby and Harrison, 1990; Dix, 1991; Gram and Cockton, 1996; Palanque and Paterno, 1997; Paterno, 1999; Weyers et al., 2017).
7.2.3 Where We Are Now: Current Use and Research in HCI
Another recently published Springer volume, The Handbook of Formal Methods in Human-Computer Interaction (Weyers et al., 2017), captures the state of the art in this area, so this section will summarize it very briefly, and the interested reader is referred to the full volume for further details.
While there is some work on state modelling, including continued use of Z (e.g., Bowen and Reeves, 2017), the largest volume of current work is on aspects of dialogue modelling, and indeed, the extended examples in this volume are also forms of dialogue modelling, albeit addressing slightly different levels. Some are based on longstanding techniques such as Statecharts, ICO (see Chapter 9), and CTT, although labelled transition systems (LTS) have also become common, either used directly or derived automatically from other notations. LTS are a form of state transition network where the arcs carry labels, usually from a finite set.
Part of the reason for the popularity of LTS, at least as a target notation, is that there are a number of tools that allow verification of properties of LTS. Automatic verification has always been a stated aim of formal methods, and in some cases (for example, in the analysis of some consumer devices and medical instruments (Thimbleby and Gow, 2008)), it has been possible to exhaustively analyse properties of the graph of possible dialogue states. However, the combinatorial state space explosion has often made this form of exhaustive analysis impossible. Developments in tool support for automatic verification, developed for other application domains, have increasingly been used to verify user interface properties (e.g., Masci et al., 2014; Bolton and Bass, 2017; Harrison et al., 2017).
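At its simplest, exhaustive analysis of an LTS amounts to enumerating the reachable state space and checking a property at each state. The following sketch, with an invented device dialogue, checks freedom from deadlock:

```python
from collections import deque

# A labelled transition system as (state, label, state) triples.
# The device dialogue is hypothetical.
lts = [
    ("off", "power", "standby"),
    ("standby", "power", "off"),
    ("standby", "start", "running"),
    ("running", "stop", "standby"),
]

def reachable(lts, start):
    """Breadth-first enumeration of the reachable state space."""
    succ = {}
    for s, _label, t in lts:
        succ.setdefault(s, []).append(t)
    seen, frontier = {start}, deque([start])
    while frontier:
        s = frontier.popleft()
        for t in succ.get(s, []):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen, succ

states, succ = reachable(lts, "off")
# Exhaustive check of a property: no reachable state is a deadlock,
# i.e., every reachable state has at least one outgoing transition.
deadlock_free = all(s in succ for s in states)
```

Real model checkers are vastly more sophisticated, but the combinatorial problem is already visible: the `seen` set grows with the product of the device’s independent components, which is precisely the state space explosion mentioned above.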
A different form of tool support is found in model-based user interfaces (e.g., Manca et al., 2017). In this context, model-based is often used in a stronger sense than the models in UML, as the models are expected to be suitable for automatic transformation or execution. The models may be based on reverse engineering of existing systems or developed by hand, but are then used to automatically generate (for example, to enable plasticity) interfaces that change their appearance and behaviour depending on the device and user characteristics (e.g., Coutaz, 2010). As well as representing a long-term research effort, summarized in Meixner et al. (2011), it has also led to a W3C standardization effort (Meixner et al., 2014).
Another development has been the work on domain specific notations (DSNs). These are notations designed for a specific application domain, e.g., chemical plants. The visualizations and vocabulary are created to more easily express the particular requirements and properties of the domain, and also be meaningful to domain experts. Chapter 8 argues strongly for the need for appropriate languages so that coders and designers can more easily express their intentions and understand the computer’s semantics, hence improving both productivity and reliability.
There are elements of this in early work, for example, dynamic pointers (Dix 1991, 1995), which were designed to specify and construct systems that included complex changing structures and required locations or parts to be marked (e.g., cursors for editing, current scroll location in text). This was effectively a bespoke, handcrafted DSN. More recent work has focused on creating abstractions, tools, and methods that make it easier for a formal expert to construct a DSN (Weyers, 2017; Van Mierlo et al., 2017).
7.3 In Practice: Transaction-based Information Systems
As we have seen in the preceding section, there are a range of methods and domains being studied by formal methods in HCI. However, it is still the case, as for formal methods in general, that the vast majority of applications are in safety-critical areas such as aerospace (e.g., Chapter 9), the nuclear industry, and medical devices (e.g., Chapter 8).
This section looks at two case studies, spanning over 30 years, that demonstrate successful applications of formal methods outside the safety-critical area.
7.3.1 The Problem
This first case study dates from the early 1980s and the design and development of interactive transaction processing systems (for more details of this case study see Dix, 2002). For those unfamiliar with this kind of system, the easiest parallel is a web server, except each application effectively functions like a dedicated web server in order to offer high throughput. The particular systems were ICL-based, but similar to the better-known IBM CICS, which is still heavily used today for cashier terminals in shops and banks and as the backend of enterprise-scale web services, albeit currently more often programmed in Java than COBOL.
The author was working in a local government IT department, which used ICL/TP for interactive information systems. The end-users were principally council staff. They interacted with the information systems using semi-smart terminals, very like a web browser that can display simple web forms. Just like with a web-form interface, the user would enter some values into a form, for example, the name of a person to search for in a personnel database, send the form, and the system would respond with a screen of information.
There were known problems with the systems deployed in the council at the time. For example, if the response was a paged list and two users submitted queries, one might occasionally get a second page corresponding to the second user’s query. While this was annoying, it was infrequent enough to be acceptable for purely information access. However, the author was tasked with creating the council’s first systems to also allow interactive update of data, and so it would be unacceptable to have similar issues.
The reason for this occasional bugginess was evident in the code of the existing applications, which was spaghetti-like due to complex decisions primarily aimed at working out the context of a transaction request. Similar code can indeed often be seen today in poorly structured web-server code that tries to interpret input from different sources, checking for the existence of variables to distinguish a submit from a cancel on a web form. In fairness to the programmers, this was very early in the development of such systems, and also the individual application did a lot of the work now done by web servers such as Apache, marshalling variables and directing requests to the appropriate code.
7.3.2 Technical Solution
Part of the solution was technical. A small non-editable identifier was placed on each form, which then functioned rather like the ‘Referrer’ field in a web transaction, telling the application what screen the request had come from. In addition, the ICL/TP system allowed information to be attached to each terminal, rather like a web cookie or session variable, which was used to keep track of context for each application transaction.
The first of these eased some of the spaghetti-code problems as it was now simply a matter of checking the non-editable field, and the second eased the odd problems on list-screens, which had been due to state being ‘mixed up’ between different users’ transactions.
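In modern web terms, the pattern looks roughly like the following Python sketch. The screen identifiers, handlers, and form fields are hypothetical, and the original systems were, of course, written for ICL/TP rather than as web code:

```python
# Dispatch on a hidden screen identifier, rather than guessing context
# from which form variables happen to be present.
def handle_request(form: dict, session: dict) -> str:
    screen_id = form["screen_id"]          # non-editable id on each form
    handler = HANDLERS[screen_id]          # no spaghetti context-sniffing
    return handler(form, session)

def search_screen(form, session):
    session["last_query"] = form["name"]   # per-terminal/session context
    return f"results for {form['name']}"

def next_page(form, session):
    # Paging uses this session's context, so two users' queries
    # can no longer be mixed up.
    return f"page 2 for {session['last_query']}"

HANDLERS = {"S1": search_screen, "S2": next_page}

session = {}
first = handle_request({"screen_id": "S1", "name": "smith"}, session)
second = handle_request({"screen_id": "S2"}, session)
```

The two fixes correspond to the two dictionary lookups: the screen id tells the code where the request came from, and the session holds the context that had previously leaked between users.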
These technical fixes made the code cleaner and more reliable; however, it was still effectively event-based and hard to interpret and construct.
In order to deal with this, a variant of flowcharts was used.
7.3.3 Standard Formal Notation: Flowcharts
Flowcharts had been heavily used for specifying computer program behaviour since the 1960s, with multiple standards, including BSI/ISO (BSI, 1987) and ECMA (1966), which used to produce useful plastic templates. The use of flowcharts for computer program specification has been largely superseded, and the ECMA standard withdrawn, but flowchart-like activity diagrams are still used in UML for workflow modelling.
Flowcharts continue to be used extensively in other, non-computing areas, as evidenced by the number of desktop and web-based flowchart-drawing applications, their presence in the PowerPoint symbol gallery, and, indeed, numerous Facebook memes, like the example in Figure 7.3, except normally far more elaborate. Note also that the fact that they are effective social media memes is due to the ease with which non-experts can read them and understand what they represent.
Flowcharts developed rich vocabularies of shapes denoting different forms of action, but the basic elements are rectangular blocks representing activities, diamonds representing choices, and arrows between them representing control flow, one activity following another (see Figure 7.3).
Flowcharts have many limitations, not least in that they only deal with sequential activity, and do not deal with concurrency. There are extensions, such as UML activity diagrams, which add features, but part of the ongoing popularity of flowcharts is undoubtedly the simplicity that makes them work so well as a communication tool.
7.3.4 Flowcharts for User Interactions
While the normal use of flowcharts at the time was describing the internal algorithms of a computer program, here the need was to describe the overall flow of user interaction. Flowcharts were therefore used to specify the structure of interactions between the user and the computer system; that is, as a user interaction specification notation.
Figure 7.4 shows an example of this for a standard delete dialogue. The diagram has two main kinds of blocks distinguishing user activity at the screen and backend processing.
The rectangles with corner inserts represent the on-screen forms that the user has to fill in. The shape was meant to suggest a screen, and a shortened version of the screen contents was included in order to make the forms self-explanatory, rather like wireframe sketching tools today such as Balsamiq (https://balsamiq.com/).
The lozenge shapes represented computer activity. This was chosen as it was available in flowchart templates, and was part way between the diamond (choice) and rectangular block (actions), as typical computer processing combines the two. Under the ECMA standard this shape had a specified meaning, but it was rarely used and so deemed unlikely to cause confusion. Quite complex internal processing would be represented as a single block, with the internal breakdown only being shown if there was an external difference (e.g., the record only being deleted if the confirm response was ‘Y’).
The small tape symbols were used to represent interactions with stored data—recall that this was the early 1980s, when personal computers used cassette tapes for storage, and, in mainframe computer centres, tapes were still used alongside disk storage for large data files.
Finally, note that each user and computer activity has an identifier at the top right. The computer ones corresponded to internal labels in the code and the user ones were displayed as the non-editable id at the top right of the display.
It is also worth noting what the user interaction flow charts did not include. There was little description of the internal code, for example, the way records would be read from disk. This was because this level of detail was ‘standard programming’, the thing the developer could easily do anyway, and which was pretty much self-documenting in the code. Neither did it give a complete description of the contents of the screens, merely enough to make the intention clear; again, it was often fairly obvious which fields from a record should be displayed and how, but if not, then this would be described separately. The formalism focused on doing the things that were hard to understand from the code alone.
These flowcharts were used in several ways.
• as a design tool, to help think about the flow of user interactions;
• as a way to communicate with clients (other departments in the council) on the behaviour of the planned systems, or alongside demos of constructed systems;
• to facilitate rapid coding: although the flowcharts were simply drawn on paper, there was a semi-automatic process of converting the diagrams into boilerplate code, which crucially meant that turnaround times for changes, often in the order of weeks for other systems, could be accomplished in hours;
• as a structure for testing, as it was easy to verify that one had tried each path through the interaction;
• as an aid to debugging, as users could easily report the id of the screens where problems were occurring.
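The semi-automatic conversion can be imagined along the following lines. This is a hypothetical reconstruction in Python (the original process produced code by hand from the paper diagrams), in which the flow is data and each diagram block becomes a labelled stub:

```python
# A user-interaction flow as data: each step has an id, a kind, and
# a short description. The delete dialogue here is illustrative.
flow = [
    {"id": "U1", "kind": "screen",  "desc": "confirm delete (Y/N)?"},
    {"id": "C1", "kind": "process", "desc": "delete record if reply was Y"},
]

def generate_boilerplate(flow: list[dict]) -> str:
    """Emit skeleton handler code: one labelled stub per diagram block."""
    lines = []
    for step in flow:
        lines.append(f"def step_{step['id']}():")
        lines.append(f"    # {step['kind']}: {step['desc']}")
        lines.append("    pass")
    return "\n".join(lines)

code = generate_boilerplate(flow)
```

Because each stub carries the same id as the diagram block and the on-screen non-editable field, the generated skeleton directly supports the testing, debugging, and maintenance uses listed above.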
In short, this led to systems that were more reliable, were more easily understood by the clients, and were produced more than ten times faster than with conventional methods at the time.
It was only some years later that it became apparent just how unusual it is to see use of formal methods that was so clearly effective. In analysing this in a short report (Dix, 2002) on which the previous description is based, a number of features were identified that contributed to this success:
• useful—the formalism addressed a real problem, not simply used for its own sake, or applied to a ‘toy’ problem. Often, formalisms are proposed because of internal properties, but here it is the problem that drove the formalism.
• appropriate—there was no more detail than needed—what was not included was as important as what was. Often formal notations force you to work at a level of detail that is not useful, which both increases the effort needed and may even obfuscate, especially for non-experts.
• communication—the images of miniature display screens and clear flow meant that it was comprehensible to developers and clients alike. The purpose of formal methods has often been phrased in terms of its ability to communicate unambiguously, but if the precision gets in the way of comprehension, then it is in vain.
• complementary—the formalism used a different paradigm (sequential) than the implementation (event driven). There is often a temptation to match formalism and implementation (e.g., both object based), which may help verification, but this means that the things difficult to understand in the code are also difficult to understand in the specification.
• fast payback—it was quicker to produce applications (by at least 1000 per cent). It is often argued that getting the specification right saves time in the long term as there are fewer bugs, but often at the cost of a long lead time, making it hard to assess progress. The lightweight and incremental nature of the method allowed rapid bang-for-buck, useful for both developer motivation and progress monitoring.
• responsive—there was also rapid turnaround of changes. Often, heavy specification-based methods can mean that change is costly. This is justified, if the time spent at specification means you have a ‘right first time’ design, but for the user interface, we know that it is only when a prototype is available that users begin to understand what they really need.
• reliability—the clear boilerplate code was less error-prone. While the transformation from diagram to code was not automated, the hand process was straightforward, and the reuse of code fragments due to the boilerplate process further increased both readability and reliability.
• quality—it was easy to establish a test cycle due to the labelling, and to ensure that all paths were well tested.
• maintenance—the unique ids made it easy to relate bugs or requests for enhancements back to the specification and code.
In summary, the formalism was used to fulfil a purpose, and was neither precious nor purist.
7.4 In Practice—Modelling Physical Interaction
The second case study is set more than 25 years later, in the context of DEPtH, a cross-disciplinary project investigating the way digital and physical design interact (for more details of this case study see Dix et al., 2009; Dix and Ghazali, 2017).1
7.4.1 The Problem
In the 1980s, the devices used for interaction were largely determined by the size and shape that were technically possible. Now personal devices from smart watches to kitchen appliances embody aspects of digital behaviour set in a wide range of physical form factors. Effective design is not just about the abstract flow of actions and information, but the way these are realized in pressable buttons or strokeable screens.
Furthermore, maker and DIY electronics communities have grown across the world, enabled by affordable 3D printers, open-source hardware such as Arduino and Raspberry Pi, and digital fabrication facilities, such as FabLabs, in most cities. This means that the design of custom electronic devices has moved from the large-scale R&D lab to the street. Items produced can be frivolous, but can include prosthetics (Eveleth, 2015), reconstructive surgery (Bibb et al., 2015), community sensing (Balestrini et al., 2017), and prototypes for large-scale production through platforms such as Kickstarter. We clearly need to understand user interaction with these devices and find ways to make the resulting products safe, usable, and enjoyable.
There are many ways to describe the internal digital behaviour of such devices. DIY-end users may use graphical systems such as variants of Scratch, or data flow-based systems; professionals may use appropriate UML models, and researchers may use various user-interface formalisms, as discussed in section 7.2.1. However, all start with some form of abstract commands (such as ‘up button pressed’) or numerical sensor input, that is, the input as it is available once it reaches the computer system.
Similarly, there are many ways to describe the 3D form itself, including standard formats for point clouds, volume and surface models; affordable devices to scan existing objects or clay-formed models into these formats; and tools to help design 3D shapes, including the complexities of how these need to be supported during printing by different kinds of devices. However, these focus principally on the static physical form, with occasional features to make it easy, for example, to ensure that doors open freely.
The gap between the two is the need to describe: (i) the way a physical object has interaction potential in and of itself, before it is connected to digital internals (buttons can be pressed, knobs turned, a small device turned over in one’s hands); and (ii) the way this intrinsic physical interaction potential is mapped onto the digital functionality.
This issue has been considered in an informal way within the UI and design community, not least in the literature on affordance (Gibson, 1979; Gaver, 1991; Norman, 1999), where issues such as the appropriate placing and ordering of light switches and door handles have led to a generation of HCI students who are now unable to open a door without confusion. At a semi-formal level Wensveen et al.’s (2004) Interaction Frogger analyses some of the properties that lead to effective physical interaction.
Within the more formal and engineering literature in HCI, the vast majority of work has been at the abstract command level, including virtually all the chapters in a recent state-of-the-art book (Weyers et al., 2017), with a small amount of research in studying status–event analysis (Dix, 1991; Dix and Abowd, 1996) and continuous interaction (Massink et al., 1999; Wüthrich, 1999; Willans and Harrison, 2000; Smith, 2006).
There are some specialized exceptions. Eslambolchilar (2006) studied the cybernetics of control of mobile devices, taking into account both the device sensors and feedback, and the human physical motor system. Zhou et al. (2014) have looked at detailed pressure-movement characteristics of buttons, which is especially important when a button has to be easy to engage when an emergency arises, but hard to press accidentally (such as (p.200) emergency cut-off, or fire alarm). Thimbleby (2007) has also modelled the physical layout of buttons on a VCR, to investigate whether taking into account Fitts’ Law timings for inter-button movement was helpful in assessing the overall completion time of tasks, and similar techniques can also be seen in Chapter 9.
In order to address this gap, physigrams were developed: a variant of state-transition networks (as discussed in section 7.2.1), but focused on the physical interactions.
7.4.2 Physigrams: STNs for the Device Unplugged
Figure 7.5 shows the simplest example of a physigram, the states of a simple on/off switch. It has two states, UP and DOWN, which the user can alter by pushing the switch up or down. At first this looks like a typical user interface dialogue specification. However, this is not about the state of the thing controlled by the switch (light, heating, computer power) with abstract on/off actions, but rather about the actual physical switch.
Imagine unscrewing the switch from the wall, removing the wires behind, and then screwing it back to the wall. It now controls nothing, but still has interaction potential—you can flick the switch up and down—and if you look at it, you can see there are two (stable) states: one with the switch pointing up, and one with the switch pointing down. The physigram captures this physical interaction potential of the device ‘unplugged’ from the electronic or digital functionality that it normally controls.
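The two-state physigram can be sketched as a tiny state-transition network. This is only an illustrative sketch, not part of the physigram notation itself; the state and action names (UP, DOWN, push_up, push_down) are chosen here for readability.

```python
# A minimal state-transition network for the switch 'unplugged':
# two stable states, each reachable from the other by a user action.
PHYSIGRAM = {
    ("UP", "push_down"): "DOWN",
    ("DOWN", "push_up"): "UP",
}

def act(state, action):
    """Apply a user action. Actions with no defined transition leave the
    state unchanged: pushing an already-up switch further up does nothing."""
    return PHYSIGRAM.get((state, action), state)

state = "UP"
state = act(state, "push_down")   # now "DOWN"
state = act(state, "push_down")   # still "DOWN": no transition defined
```

Note that only the connectivity matters: the diagram layout carries no formal meaning, a point that becomes important later when the designers appropriate the notation.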
For a more complex example of a device unplugged, imagine giving a TV remote control to a small child to play with. You have removed the batteries, so the child cannot change the actual channel, but they can still play with it in their hands, and press the various buttons.
Of course, the relationship between this physical interaction potential and the thing it controls is crucial. Figure 7.6 shows the physigram of the switch, on the left, connected to the states of the controlled electronic system (a light bulb), on the right.
In this case, the mapping between physical device states and the states of the underlying system is one-to-one, the simplest mapping and a very powerful one when possible, especially if there is any delay (as with a fluorescent tube) or the thing controlled is not visible (p.201) (e.g., an outside light controlled from inside). In these cases, assuming there is no fault, the state of the light can be observed from the state of the switch.
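The power of the one-to-one mapping can be seen in a small sketch: because the mapping is a bijection, it can be inverted, so the state of the light can be read off the switch even when the bulb itself is out of sight. The mapping and state names below are hypothetical (following the UK convention that down is on).

```python
# One-to-one mapping from physical switch states to logical light states.
MAPPING = {"UP": "LIGHT_OFF", "DOWN": "LIGHT_ON"}

def light_state(switch_state):
    """What the light is doing, inferred purely from the visible switch."""
    return MAPPING[switch_state]

def switch_state(light):
    """Invert the bijection: observability works in both directions."""
    inverse = {v: k for k, v in MAPPING.items()}
    return inverse[light]
```

A many-to-one or transition-based mapping would not invert like this, which is exactly why, in those cases, the user must rely on other feedback to know the system state.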
More generally, this one-to-one mapping is not always possible, and in many cases the physical device controls some form of transition. In these cases, the link between the physical and logical sides of the system ends up more like the classic user specification with abstract commands, except in this case, we have a model of how the commands are produced, not just how they are used. In some ways, the ICO model of input device drivers in Chapter 9 fulfils a role similar to physigrams, albeit less focused on the actual physical behaviour. However, the way these are annotated with events and event parameters shows one way in which device behaviour can be linked to more abstract system models.
This connection between the physigram and underlying system states is very important, but for the purposes of the current discussion, we will focus entirely on the physigram itself, as it is the more novel aspect.
(p.202) Figure 7.7 shows a button with ‘bounce-back’. These are common in electronic devices; indeed, there are seventy-nine on the laptop keyboard being used to type this chapter. Whereas most light switches stay in the position you move them to, when you press a key on a keyboard, it bounces back up as soon as you take your finger off the key. Notice that the IN state has a dashed line around it, which shows that it is a temporary tension state—unstable without pressure being applied. The arrow from OUT to IN is smooth, denoting a user action of pressing in, but the arrow from IN to OUT is a ‘lightning bolt’ arrow, denoting the physical bounce-back of the button.
In fact, even a typical up/down light switch has a small amount of bounce-back. If the switch is up and you press lightly down, it moves a little, but if you release it before it gets halfway it pops back to its original position. When it passes the halfway position it suddenly ‘gives’, snapping to the down position.
This can be represented by adding part-in/part-out states with bounce-back transitions, as shown in Figure 7.8(i). This detailed representation would be useful if the digital system had some sort of response to the partly moved states (perhaps an audio feedback to say what will get turned on/off). However, mostly it is sufficient to know that they have this feel for the user, and so the shorthand in Figure 7.8(ii) was introduced; the ‘bounce’ marks where the arrow exits the state are intended to represent this small amount of resistance, bounce-back, and give.
Tables 7.1 and 7.2 summarize the symbols we have seen so far for states and transitions respectively. A more complete list can be found in Dix and Ghazali (2017) and online at http://physicality.org/physigrams/.
Table 7.1. Physigram states (from Dix and Ghazali, 2017)
Table 7.2. Physigram transitions (from Dix and Ghazali, 2017)
So far, the analysis described was all performed by the computing members of the DEPtH team. It was then handed over to the product designers on the project, who were, at the time, looking at prototypes of three variants of a media player: one with a protruding knob, one with a flat dial, and one with a round touch-sensitive pad, each used to control a hierarchical menu system with a maximum of seven items at each level. In each design, a press or push would activate the currently selected menu item. The product designers created physigrams for each design (see Figure 7.9).
The first thing to note about the designers’ physigrams is the aesthetics. Whereas the physigrams shown previously were all flat, two-dimensional representations, the product designers used three dimensions, with the states drawn on gently curving disks. The third dimension was also used to highlight the correspondence of the position of the dial when pressed down or not pressed.
In designs 7.9(i) and 7.9(ii), the knob or dial had bounce-back, and this is clearly shown (albeit with the bounce-back arrow slightly hidden), but rather than have seven bounce-back arrows, one for each state, a single down and up transition is shown, so that UP and DOWN are sort of high-level states (there was an example of something like this in the documents the designers were given).
In contrast, in Figure 7.9(iii), the press down on the touchpad, while having a digital effect, had no tangible feel of being pressed, and there was no sense of being able to still twist it while it was pressed. That is, in Buxton’s (1990) terms, Figure 7.9(iii) is a three-state (0–1–2) device, distinguishing internally not touching (state 0), touching/dragging (state 1), and pressing (state 2), although the press has no tangible feel, and hence does not distinguish Buxton’s states 1 and 2 for the user. To represent this, the designers drew the DOWN state without separate selection sub-states, and drew the arc as a ‘self-transition’ (an arrow (p.205) from UP to itself, passing through the DOWN state) to denote that it was something that you know was happening, but without any physical feedback.
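The mismatch between the system’s states and the user’s felt states can be made concrete in a small sketch of Buxton’s three-state model; the function and state names are hypothetical.

```python
# Buxton's (1990) three-state (0-1-2) model of the touchpad. The system
# distinguishes all three states, but the press (1 -> 2) has no tangible
# feel, so by touch alone the user's physigram collapses states 1 and 2.
SYSTEM_STATES = {0: "not touching", 1: "touching/dragging", 2: "pressing"}

def user_perceived(state):
    """What the user can tell by feel alone: only contact vs no contact."""
    return "not touching" if state == 0 else "in contact"
```

The fact that `user_perceived(1) == user_perceived(2)` is exactly what the designers’ self-transition arc expresses: the transition happens, and the user knows it is happening, but nothing in the device’s physical behaviour marks it.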
There were also detailed differences in the transitions when the knob, dial, or touchpad was rotated (see Figure 7.10). The knob had a resistance to movement and then gave, and hence was drawn with the ‘give’ arrows shown in Figure 7.8(ii). In contrast, the stops on the dial (Figure 7.10(ii)) had some tangible feel, but only slightly, and with no sense of resistance to motion; hence, the stops were drawn as simple transitions. Finally, the touchpad (Figure 7.10(iii)) had no tangible feedback at all. The designers ‘knew’ there were seven stops, but this was not at all apparent from the feel of the device. They felt they wanted to record the seven stops, but drew the transitions as simply passing through these; it would only be through visual or audio feedback that the user would be able to actually tell what was selected.
First, this case study demonstrates the complexity of physical interaction. The specialized notation in the user interaction flowcharts in section 7.3 was developed once for a particular system design and then reused on other systems without any need for additional constructs. In contrast, the computer science side of the DEPtH team had analysed numerous devices in order to create a vocabulary for physigrams, and yet on the first use by others, new cases were found.
Of course, the good thing is that the designers were able to adapt the physigrams to model the unexpected physical properties. This demonstrated two things: comprehensibility, in that the notation was sufficiently comprehensible that they took ownership and felt confident to modify it, even though they were not mathematicians or computer scientists; and openness to appropriation, in that the notation had sufficient flexibility that there were obvious ways to extend or adapt it.
Of particular note in terms of appropriation, the fact that the semantics of STNs use only connectivity, not layout, meant that the designers were free to use layout themselves, including 3D layout, in order to create the most meaningful version for human–human communication. This is an example of the appropriation design principle openness to interpretation, identified elsewhere (Dix, 2007): by leaving parts of a system or notation without formal interpretation, this makes it available for rich interpretation by users, or in Green and Petre’s (1996) terms, secondary notation.
(p.206) The downside of this ease of modification is that the semantics were not fixed, which would be problematic if we had wished to perform some sort of automatic analysis. In fact, for this purpose, the clarity for the product designers and the ability to communicate design intentions between designers and computer scientists was more important than having a computer readable notation.
7.5.1 Critical Features
The first case study identified a number of features of interaction flowcharts that led to success: useful, appropriate, communication, complementary, fast payback, responsive, reliability, quality, and maintenance. Some of these, reliability, quality, and ease of maintenance, would be familiar to anyone who has read arguments advocating formal methods, precisely where the formality and rigour would be expected to give traction. Others, notably fast payback and responsiveness, would be more surprising, as formal methods are normally seen as cumbersome with long lead times to benefit.
These additional benefits are effectively outcomes of the first four features, which can be summarized in two higher-level considerations, which are shared with the physigrams case study.
Tuned for purpose—The interaction flowcharts were useful and appropriate, i.e., they did what was necessary for a purpose and no more. Recall the way lower-level details were ignored, as these were already understood; only the high-level dialogue was represented and the way this attached to code and screens. Similarly, the physigrams focus on a specific aspect, i.e., the device unplugged and its interaction potential; all the digital features of the device are (initially) ignored. It is interesting to note that, while both specify interaction order and flow, one is at a higher level than most dialogue specifications, and the other a lower level.
Optimized for human consumption—In the case of the interaction flowcharts, the choice of a complementary paradigm (sequential rather than event based) helped the programmer understand aspects that were hard to comprehend in the code itself. Similarly, the physigrams specify an aspect of the device that normally is not represented in the code or standard diagrammatic interaction notations. Also, as noted in the first case study, while formal methods are often advocated as enabling communication, this is normally meant as unambiguous communication between highly trained members of a technical team. In contrast, both the interactive flowcharts and physigrams focused on expressive communication, enabling use by clients and non-formally trained designers respectively.
Early work emphasized the importance of bridging the formality gap (Dix, 1991), i.e., the need to be confident that a formal specification reflects the real world issue it is intended to express. That is, while it is relatively easy to ensure internal validity between formal expressions within a design process, it is external validity that is hard to achieve. It is precisely in establishing this external validity that expressive communication helps.
(p.207) Both tuning and expressive communication were enabled by appropriation. This appropriation happened at two levels. In both case studies, an existing notation (flowcharts, state transition networks) was borrowed and modified to create a new lightweight notation tuned to the specific purpose. In both cases this extension included both features tuned to computational or analytic aims (overall flow of states), but also expressive communication (for example, the miniature screen images in interactive flowcharts and special arrows in physigrams). However, in the second case study we saw a further level of appropriation when the product designers themselves modified physigrams to express aspects beyond those considered by the notation’s creators.
Some of this appropriation is possible because of the flexibility enabled by the uninterpreted aspects of the notations: the layout on the page, and the descriptive text annotating states and transitions; as noted previously, all classic ways to design for appropriation (Dix, 2007). However, in some cases, the appropriation may ‘break’ the formalism; for example, the ‘parallel’ states within the UP and DOWN high-level states in Figure 7.9 are clearly connected for the human viewer, but would not be so if the diagram were interpreted formally.
Achieving this flexible formalism is a hard challenge. The idea of domain specific notations, introduced in section 7.2.3, would certainly help in the first level of appropriation, as the formal expert effectively creates a rich syntactic sugar that allows the domain expert to use the DSN and have it transformed into a lower-level standard notation amenable to automated analysis or transformation.
The second level of appropriation requires something more radical. One option would be to allow semantics by annotation, where the user of the formalism has a lot of flexibility, but must map this back to a more restricted predetermined formalism (which itself may be domain specific). For example, in the product designers’ physigrams in Figure 7.9(i), the arc between the UP and DOWN high-level states could be selected (either by the designers themselves or the formal experts) and then declared to represent the series of arcs between the low-level states. The primary diagram is still as drawn and, for most purposes, used for human–human communication, but if there is any doubt in the meaning, or if automated analysis is needed, then the expanded standard form is accessible. In many ways this is like Knuth’s (1992) concept of literate programming; his WEB system allowed Pascal programmers to write their code and internal documentation in the order that suited them, with additional annotations to say how the fragments of code should be re-ordered and linked in the order needed for the Pascal compiler.
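The ‘semantics by annotation’ idea can be sketched as a simple expansion step: a single high-level arc, as drawn by the designers, is annotated as standing for a whole family of low-level arcs, which can be regenerated whenever automated analysis is needed. All names here are hypothetical, assuming seven selection sub-states as in the media-player designs.

```python
# A high-level arc annotated as standing for one arc per selection
# sub-state. The drawn diagram stays as the designers made it; the
# expansion recovers a standard STN for automated analysis.
SELECTIONS = [f"sel_{i}" for i in range(1, 8)]  # seven menu positions

def expand(high_level_arc):
    """Expand (src, action, dst) into one low-level arc per sub-state,
    preserving the current selection across the press."""
    src, action, dst = high_level_arc
    return [((src, s), action, (dst, s)) for s in SELECTIONS]

arcs = expand(("UP", "press", "DOWN"))
# seven low-level arcs, e.g. (("UP", "sel_1"), "press", ("DOWN", "sel_1"))
```

The drawn diagram remains primary for human–human communication; the expanded form is only materialized when the formal semantics is actually needed, in the spirit of Knuth’s reordering annotations in literate programming.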
7.5.2 A Growing Need for Formal Methods in HCI
The ‘Gaps and Trends’ chapter of The Handbook of Formal Methods in Human-Computer Interaction (Weyers et al., 2017) suggests a number of challenges for formal methods, many of which are inherited directly from previously identified trends in the use of technology in the world and within HCI (Dix, 2016). Several of these are particularly relevant to this chapter.
First, the increasing use of digital systems in government, transport, and commerce has made the use of computers an essential part of being a citizen in the modern world. HCI (p.208) has always recognized that it is important to take into account a diversity of people: different ages, different abilities, different social and cultural backgrounds. However, digital citizenry means this can no longer be an afterthought. Similarly, mobile access and digitally enabled household appliances mean that users encounter a diversity of devices, sometimes simultaneously, as with second-screen TV watching. Attempting to think through all the permutations of devices and personal characteristics and situations is precisely where automated analysis is helpful.
Looking at the technology itself, both the Internet of Things (IoT) and big data create situations where we encounter complexity through scale due to interactions of myriad small parts. This may lead to feature interactions or emergent effects that are hard to predict. Again, this is an area where more automated and formal analysis is helpful.
However, the above should be set in the context of digital fabrication and maker culture, where larger numbers of people are involved with hardware and software development and customization.
So, the circumstances in the world are making the use of formal methods more important, and yet also mean that those who would need to use them are likely to be less expert in the very same methods that many computer scientists find difficult. The need for practical formal methods is clear.2
Abowd, G., and Dix, A., 1992. Giving undo attention. Interacting with Computers, 4(3), pp. 317–42.
Balestrini, M., Creus, J., Hassan, C., King, C., Marshall, P., and Rogers, Y., 2017. A City in Common: A Framework to Orchestrate Large-scale Citizen Engagement around Urban Issues. Proc. CHI 2017, ACM.
Bass, L., Little, R., Pellegrino, R., Reed, S., Seacord, R., Sheppard, S., and Szczur, M.R., 1991. The ARCH model: Seeheim Revisited, User Interface Developers’ Workshop, April 26, Version 1.0.
Bibb, R., Eggbeer, D., and Paterson, A., 2015. Medical Modelling: The Application of Advanced Design and Rapid Prototyping Techniques in Medicine. 2nd ed. Kidlington: Elsevier.
Bolton, M., and Bass, E., 2017. Enhanced Operator Function Model (EOFM): A Task Analytic Modeling Formalism for Including Human Behavior in the Verification of Complex Systems. In: B. Weyers, J. Bowen, A. Dix, P. Palanque, eds. The Handbook of Formal Methods in Human-Computer Interaction. New York, NY: Springer. pp. 343–77.
Booch, G., Rumbaugh, J., and Jacobson, I., 1999. The Unified Modeling Language User Guide. London: Addison Wesley.
Bowen, J., and Reeves, S., 2017. Combining Models for Interactive System Modelling. In: B. Weyers, J. Bowen, A. Dix, P. Palanque, eds. The Handbook of Formal Methods in Human-Computer Interaction. New York, NY: Springer. pp. 161–82.
British Standards Institution, 1987. BS 4058:1987, ISO 5807:1985: Specification for data processing flow chart symbols, rules and conventions. Milton Keynes: BSI.
Buxton, W., 1990. A three-state model of graphical input. In: Proceedings of Human–Computer Interaction—INTERACT ’90. Amsterdam: Elsevier, pp. 449–56.
Coutaz, J., 1987. PAC, an object-oriented model for dialogue design. In: H-J. Bullinger and B. Shackel, eds. Human Computer Interaction INTERACT ’87, pp. 431–6.
Coutaz, J., 2010. User Interface Plasticity: Model Driven Engineering to the Limit! In Engineering Interactive Computing Systems. Berlin, Germany, 19–23 June 2010.
Dix, A., 1991. Formal methods for interactive systems. New York: Academic Press. Available through: http://www.hiraeth.com/books/formal/.
Dix, A., 1995. Dynamic pointers and threads. Collaborative Computing, 1(3):191–216. Available through: http://alandix.com/academic/papers/dynamic-pntrs-95/.
Dix, A., 2002. Formal Methods in HCI: a Success Story—why it works and how to reproduce it. Available through: http://alandix.com/academic/papers/formal-2002/.
Dix, A., 2007. Designing for appropriation. In: D. Ramduny-Ellis and D. Rachovides, eds. HCI 2007…but not as we know it, Volume 2: People and Computers XXI, The 21st British HCI Group Annual Conference. Lancaster, UK, 3–7 September 2007. London: BCS. Available from: http://www.bcs.org/server.php?show=ConWebDoc.13347.
Dix, A., 2016. Human computer interaction, foundations and new paradigms, Journal of Visual Languages & Computing, in press. doi: 10.1016/j.jvlc.2016.04.001.
Dix, A., and Abowd, G., 1996. Modelling status and event behaviour of interactive systems. Softw Eng J 11(6), pp. 334–46, http://www.hcibook.com/alan/papers/SEJ96-s+e/
Dix, A., and Ghazali, M., 2017. Physigrams: Modelling Physical Device Characteristics Interaction. In: B. Weyers, J. Bowen, A. Dix, P. Palanque, (p.210) eds. The Handbook of Formal Methods in Human-Computer Interaction. New York, NY: Springer, pp. 247–72.
Dix, A., Ghazali, M., Gill, S., Hare, J., and Ramduny-Ellis, S., 2009. Physigrams: Modelling Devices for Natural Interaction. Formal Aspects of Computing, 21(6), pp. 613–41.
Dix, A., and Runciman, C., 1985. Abstract models of interactive systems. In: P. Johnson and S. Cook, eds. People and Computers: Designing the Interface. Cambridge: Cambridge University Press. pp. 13–22. Available through: http://www.alandix.com/academic/papers/PIE85/.
ECMA International, 1966. Standard: ECMA 4, Flow Charts. 2nd ed. Geneva: European Association for Standardizing Information and Communication Systems.
Eslambolchilar, P., 2006. Making sense of interaction using a model-based approach. PhD. Hamilton Institute, National University of Ireland.
Eveleth, R., 2015. DIY prosthetics: the extreme athlete who built a new knee. Mosaic, 19 May 2015, [online] Available at: https://mosaicscience.com/story/extreme-prosthetic-knee.
Freitas, L., Woodcock, J., and Zhang, Y., 2009. Verifying the CICS File Control API with Z/Eves: An experiment in the verified software repository. Science of Computer Programming, 74(4), pp. 197–218. http://dx.doi.org/10.1016/j.scico.2008.09.012
Gaver, W., 1991. Technology affordances. In: CHI ’91: Proceedings of the SIGCHI conference on Human factors in computing systems. New York, NY: ACM Press, pp. 79–84.
Gibson, J., 1979. The Ecological Approach to Visual Perception. New York, NY: Houghton Mifflin.
Gram, C., and Cockton, G., eds., 1996. Design principles for interactive software. London: Chapman & Hall.
Green, T., and Petre, M., 1996. Usability analysis of visual programming environments: a ‘cognitive dimensions’ framework. Journal of Visual Languages and Computing, 7, pp. 131–74.
Harel, D., 1987. Statecharts: A Visual Formalism for Complex Systems. Science of Computer Programming, 8(3), pp. 231–74.
Harrison, M., Masci, P., Creissac Campos, J., and Curzon, P., 2017. The Specification and Analysis of Use Properties of a Nuclear Control System. In: B. Weyers, J. Bowen, A. Dix, P. Palanque, eds. The Handbook of Formal Methods in Human-Computer Interaction. New York, NY: Springer. pp. 379–403.
Kieras, D., and Polson, P., 1985. An approach to the formal analysis of user complexity. International Journal of Man-Machine Studies, 22, pp. 365–94.
Knuth, D., 1992. Literate Programming. Stanford, CA: Stanford University Center for the Study of Language and Information.
Manca, M., Paternò, F., and Santoro, C., 2017. A Public Tool Suite for Modelling Interactive Applications. In: B. Weyers, J. Bowen, A. Dix, P. Palanque, eds. The Handbook of Formal Methods in Human-Computer Interaction. New York, NY: Springer. pp. 505–28.
Mancini, R., 1997. Modelling Interactive Computing by Exploiting the Undo. PhD. Università degli Studi di Roma ‘La Sapienza’. Available through: http://www.hcibook.net/people/Roberta/.
Masci, P., Zhang, Y., Jones, P., Curzon, P., and Thimbleby, H., 2014. Formal Verification of Medical Device User Interfaces Using PVS. In: Proceedings of the 17th International Conference on Fundamental Approaches to Software Engineering, Volume 8411. New York, NY: Springer-Verlag. pp. 200–14. Available through: http://dl.acm.org/citation.cfm?id=2731750.
Massink, M., Duke, D., and Smith, S., 1999. Towards hybrid interface specification for virtual environments. In: DSV-IS 1999 Design, Specification and Verification of Interactive Systems. Berlin: Springer. pp. 30–51.
Meixner, G., Calvary, G., and Coutaz, J., 2014. Introduction to Model-Based User Interfaces. W3C Working Group Note, 7 January 2014. Available through: http://www.w3.org/TR/mbui-intro/.
Meixner, G., Paternò, F., and Vanderdonckt, J., 2011. Past, Present, and Future of Model-Based User Interface Development. i-com, 10(3), pp. 2–11.
Navarre, D., Palanque, P. A., Ladry, J-F., and Barboni, E., 2009. ICOs: a model-based user interface description technique dedicated to interactive systems addressing usability, reliability and scalability. ACM Transactions on Computer-Human Interaction, 16(4), pp. 1–56.
Norman, D., 1999. Affordance, conventions, and design. Interactions, 6(3), pp. 38–43, New York, NY: ACM Press.
Palanque, P., and Paterno, F., eds., 1997. Formal Methods in Human-Computer Interaction. London: Springer-Verlag.
Paterno, F., 1999. Model-Based Design and Evaluation of Interactive Applications. 1st ed. London: Springer-Verlag.
Pfaff, G., and Hagen, P., eds., 1985. Seeheim Workshop on User Interface Management Systems. Berlin: Springer-Verlag.
Polak, W., 2002. Formal methods in practice. Science of Computer Programming, 42(1), pp. 75–85.
Reisner, P., 1981. Formal Grammar and Human Factors Design of an Interactive Graphics System. IEEE Transactions on Software Engineering, 7(2), pp. 229–40. DOI=10.1109/TSE.1981.234520
Shepherd, A., 1989. Analysis and training in information technology tasks. In D. Diaper and N. Stanton, eds. The Handbook of Task Analysis for Human-Computer Interaction. Mahwah, NJ: Lawrence Erlbaum. pp. 15–55.
Smith, S., 2006. Exploring the specification of haptic interaction. In: DS-VIS 2006: Interactive systems: design, specification and verification. Dublin, Ireland, 26–28 July 2006. Springer, Berlin. Available through: https://link.springer.com/chapter/10.1007/978-3-540-69554-7_14.
Spivey, J., 1992. The Z Notation: a reference manual. 2nd edn. Upper Saddle River, NJ: Prentice Hall.
Sufrin, B., 1982. Formal specification of a display-oriented text editor. Science of Computer Programming, 1, pp. 157–202.
Thimbleby, H., 2007. Using the Fitts law with state transition systems to find optimal task timings. In: FMIS2007: Proceedings of the Second International Workshop on Formal Methods for (p.211) Interactive Systems. Limerick, Ireland, 4 September 2007. Available through: http://www.dcs.qmul.ac.uk/research/imc/hum/fmis2007/preproceedings/FMIS2007preproceedings.pdf.
Thimbleby, H., and Gow, J., 2008. Applying Graph Theory to Interaction Design. In: J. Gulliksen, M. B. Harning, P. Palanque, G. C. van der Veer, and J. Wesson, eds. Engineering Interactive Systems. Lecture Notes in Computer Science, Volume 4940. Berlin: Springer.
Thimbleby, H., and Harrison, M., eds., 1990. Formal Methods in Human-Computer Interaction. Cambridge: Cambridge University Press.
Van Mierlo, S., Van Mierlo, Y., Meyers, B., and Vangheluwe, H., 2017. Domain-Specific Modelling for Human–Computer Interaction. In: B. Weyers, J. Bowen, A. Dix, P. Palanque, eds. The Handbook of Formal Methods in Human-Computer Interaction. New York, NY: Springer. pp. 435–63.
Wensveen, S., Djajadiningrat, J., and Overbeeke, C., 2004. Interaction frogger: a design framework to couple action and function. In: DIS ’04: Proceedings of the 5th conference on Designing interactive systems: processes, practices, methods, and techniques. Cambridge, MA, 2–4 August 2004.
Weyers, B., 2017. Visual and Formal Modeling of Modularized and Executable User Interface Models. In: B. Weyers, J. Bowen, A. Dix, P. Palanque, eds. The Handbook of Formal Methods in Human-Computer Interaction. New York, NY: Springer. pp. 125–60.
Weyers, B., Bowen, J., Dix, A., and Palanque, P., eds., 2017. The Handbook of Formal Methods in Human-Computer Interaction. New York, NY: Springer.
Willans, J., and Harrison, M., 2000. Verifying the behaviour of virtual world objects. In: Proceedings of DSV-IS - Interactive Systems: Design, Specification, and Verification. Limerick, Ireland, June 5–6, 2000. Berlin: Springer.
Wüthrich, C., 1999. An analysis and model of 3D interaction methods and devices for virtual reality. In: Proceedings of DSV-IS - Interactive Systems: Design, Specification, and Verification. Braga, Portugal, 2–4 June 1999. Berlin: Springer.
Zhou, W., Reisinger, J., Peer, A., and Hirche, S., 2014. Interaction-Based Dynamic Measurement of Haptic Characteristics of Control Elements. In: Auvray, M., and Duriez, C., eds. Haptics: Neuroscience, Devices, Modeling, and Applications: 9th International Conference, EuroHaptics 2014, Proceedings, Part I, pp. 177–84. Versailles, France, June 24–26, 2014. Berlin: Springer. (p.212)