Multimodal interaction

Multimodal interaction provides the user with multiple modes of interacting with a system. A multimodal interface provides several distinct tools for input and output of data.

Introduction

Multimodal human-computer interaction refers to "interaction with the virtual and physical environment through natural modes of communication".[1] This implies that multimodal interaction enables freer and more natural communication, interfacing users with automated systems in both input and output.[2] Specifically, multimodal systems can offer a flexible, efficient and usable environment allowing users to interact through input modalities, such as speech, handwriting, hand gesture and gaze, and to receive information from the system through output modalities, such as speech synthesis, smart graphics and other modalities, suitably combined. A multimodal system then has to recognize the inputs from the different modalities, combining them according to temporal and contextual constraints[3] in order to allow their interpretation. This process is known as multimodal fusion, and it has been the object of several research works from the 1990s to the present.[4][5][6][7][8][9][10][11] The fused inputs are interpreted by the system. Naturalness and flexibility can produce more than one interpretation for each modality (channel) and for their simultaneous use, and they can consequently produce multimodal ambiguity,[12] generally due to imprecision, noise or other similar factors. Several methods have been proposed for resolving these ambiguities.[13][14][15][16][17][18] Finally, the system returns outputs to the user through the various modal channels (disaggregated), arranged as consistent feedback (fission).[19] The pervasive use of mobile devices, sensors and web technologies can offer adequate computational resources to manage the complexity implied by multimodal interaction. "Using cloud for involving shared computational resources in managing the complexity of multimodal interaction represents an opportunity. In fact, cloud computing allows delivering shared scalable, configurable computing resources that can be dynamically and automatically provisioned and released".[20]
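
To make the fusion step concrete, the following minimal sketch (an illustration only, not any of the cited systems) pairs a speech hypothesis with a pointing gesture when the two fall within a small time window, reflecting the temporal constraints mentioned above; the event names and window size are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class ModalEvent:
        modality: str   # e.g. "speech" or "gesture"
        value: str      # recognizer hypothesis, e.g. "delete that"
        start: float    # seconds
        end: float

    def fuse(events, max_gap=1.5):
        # Toy multimodal fusion: a speech event and a gesture event are
        # combined only if they overlap or fall within max_gap seconds
        # of each other (a temporal constraint), yielding one command.
        speech = [e for e in events if e.modality == "speech"]
        gesture = [e for e in events if e.modality == "gesture"]
        commands = []
        for s in speech:
            near = [g for g in gesture
                    if abs(g.start - s.end) <= max_gap
                    or abs(s.start - g.end) <= max_gap]
            if near:
                g = min(near, key=lambda g: abs(g.start - s.start))
                commands.append((s.value, g.value))
        return commands

    print(fuse([ModalEvent("speech", "delete that", 0.0, 0.8),
                ModalEvent("gesture", "file_42", 0.5, 0.9)]))
    # [('delete that', 'file_42')]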

Multimodal input

Two major groups of multimodal interfaces have emerged, one concerned with alternate input methods and the other with combined input/output. The first group of interfaces combines various user input modes beyond the traditional keyboard and mouse input/output, such as speech, pen, touch, manual gestures,[21] gaze, and head and body movements.[22] The most common such interface combines a visual modality (e.g. a display, keyboard, and mouse) with a voice modality (speech recognition for input, speech synthesis and recorded audio for output). However, other modalities, such as pen-based input or haptic input/output, may be used. Multimodal user interfaces are a research area in human-computer interaction (HCI).

The advantage of multiple input modalities is increased usability: the weaknesses of one modality are offset by the strengths of another. On a mobile device with a small visual interface and keypad, a word may be quite difficult to type but very easy to say (e.g. Poughkeepsie). Consider how one would access and search through digital media catalogs from these same devices or set-top boxes. In one real-world example, patient information in an operating room environment is accessed verbally by members of the surgical team to maintain an antiseptic environment, and presented in near real time aurally and visually to maximize comprehension.

Multimodal input user interfaces have implications for accessibility.[23] A well-designed multimodal application can be used by people with a wide variety of impairments. Visually impaired users rely on the voice modality with some keypad input. Hearing-impaired users rely on the visual modality with some speech input. Other users will be "situationally impaired" (e.g. wearing gloves in a very noisy environment, driving, or needing to enter a credit card number in a public place) and will simply use the appropriate modalities as desired. On the other hand, a multimodal application that requires users to be able to operate all modalities is very poorly designed.

The most common form of input multimodality on the market makes use of the XHTML+Voice (also known as X+V) Web markup language, an open specification developed by IBM, Motorola, and Opera Software. X+V is currently under consideration by the W3C and combines several W3C Recommendations, including XHTML for visual markup, VoiceXML for voice markup, and XML Events, a standard for integrating XML languages. Multimodal browsers supporting X+V include IBM WebSphere Everyplace Multimodal Environment, Opera for Embedded Linux and Windows, and NetFront for Windows Mobile. To develop multimodal applications, software developers may use a software development kit, such as IBM WebSphere Multimodal Toolkit, based on the open source Eclipse framework, which includes an X+V debugger, editor, and simulator.[citation needed]
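
The fragment below sketches the broad shape of such a document: an XHTML page whose head carries a VoiceXML form, with an XML Events binding that activates the voice form when the visual field gains focus, so the user may either type or speak. It is a schematic illustration rather than a validated X+V document; the grammar file, ids, and synchronization details are simplified or hypothetical.

    <?xml version="1.0"?>
    <html xmlns="http://www.w3.org/1999/xhtml"
          xmlns:vxml="http://www.w3.org/2001/vxml"
          xmlns:ev="http://www.w3.org/2001/xml-events">
      <head>
        <!-- VoiceXML form: prompts for a city and copies the spoken
             result into the visual text field below -->
        <vxml:form id="voice_city">
          <vxml:field name="city">
            <vxml:prompt>Which city?</vxml:prompt>
            <vxml:grammar src="city.grxml" type="application/srgs+xml"/>
            <vxml:filled>
              <vxml:assign name="document.getElementById('cityfield').value"
                           expr="city"/>
            </vxml:filled>
          </vxml:field>
        </vxml:form>
      </head>
      <body>
        <!-- XML Events binding: focusing the field runs the voice form -->
        <input type="text" id="cityfield" ev:event="focus"
               ev:handler="#voice_city"/>
      </body>
    </html>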

Multimodal sentiment analysis

Multimodal sentiment analysis extends traditional text-based sentiment analysis beyond the analysis of texts to include other modalities such as audio and visual data.[24] It can be bimodal, which includes different combinations of two modalities, or trimodal, which incorporates three modalities.[25] With the extensive amount of social media data available online in different forms such as videos and images, the conventional text-based sentiment analysis has evolved into more complex models of multimodal sentiment analysis,[26] which can be applied in the development of virtual assistants,[27] analysis of YouTube movie reviews,[28] analysis of news videos,[29] and emotion recognition (sometimes known as emotion detection) such as depression monitoring,[30] among others.

As in traditional sentiment analysis, one of the most basic tasks in multimodal sentiment analysis is sentiment classification, which classifies different sentiments into categories such as positive, negative, or neutral.[31] The complexity of analyzing text, audio, and visual features to perform such a task requires the application of different fusion techniques, such as feature-level, decision-level, and hybrid fusion.[26] The performance of these fusion techniques and of the classification algorithms applied is influenced by the type of textual, audio, and visual features employed in the analysis.[32]
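
The difference between feature-level (early) and decision-level (late) fusion can be sketched as follows. This is a toy illustration with a stand-in classifier; the feature sizes, weights, and class labels are hypothetical, and a real system would use trained models for each branch.

    import numpy as np

    CLASSES = ["negative", "neutral", "positive"]

    def toy_classifier(x):
        # Stand-in for a trained model: map a feature vector to a
        # probability distribution over sentiment classes.
        rng = np.random.default_rng(abs(hash(x.tobytes())) % 2**32)
        logits = rng.normal(size=len(CLASSES))
        e = np.exp(logits - logits.max())
        return e / e.sum()

    # Pretend per-modality feature vectors for one video clip.
    text_f, audio_f, visual_f = np.ones(50), np.ones(20), np.ones(30)

    # Feature-level (early) fusion: concatenate, then classify once.
    early = toy_classifier(np.concatenate([text_f, audio_f, visual_f]))

    # Decision-level (late) fusion: classify each modality separately,
    # then average the probabilities (weights could reflect reliability).
    late = np.average([toy_classifier(f) for f in (text_f, audio_f, visual_f)],
                      axis=0, weights=[0.5, 0.25, 0.25])

    print(CLASSES[int(np.argmax(early))], CLASSES[int(np.argmax(late))])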

Multimodal output

The second group of multimodal systems presents users with multimedia displays and multimodal output, primarily in the form of visual and auditory cues. Interface designers have also started to make use of other modalities, such as touch and olfaction. Proposed benefits of multimodal output systems include synergy and redundancy. Information that is presented via several modalities is merged and refers to various aspects of the same process. The use of several modalities for processing exactly the same information provides an increased bandwidth of information transfer.[33][34][35] Currently, multimodal output is used mainly for improving the mapping between communication medium and content and to support attention management in data-rich environments where operators face considerable visual attention demands.[36]

An important step in multimodal interface design is the creation of natural mappings between modalities and the information and tasks. The auditory channel differs from vision in several aspects: it is omnidirectional, transient, and always reserved.[36] Speech output, one form of auditory information, has received considerable attention, and several guidelines have been developed for its use. Michaelis and Wiggins (1982) suggested that speech output should be used for simple short messages that will not be referred to later. It was also recommended that speech be generated in time and require an immediate response.

The sense of touch was first utilized as a medium for communication in the late 1950s.[37] It is not only a promising but also a unique communication channel. In contrast to vision and hearing, the two traditional senses employed in HCI, the sense of touch is proximal: it senses objects that are in contact with the body, and it is bidirectional in that it supports both perception and acting on the environment.

Examples of auditory feedback include auditory icons in computer operating systems indicating users' actions (e.g. deleting a file, opening a folder, an error), speech output for presenting navigational guidance in vehicles, and speech output for warning pilots in modern airplane cockpits. Examples of tactile signals include vibrations of the turn-signal lever to warn drivers of a car in their blind spot, the vibration of a car seat as a warning to drivers, and the stick shaker on modern aircraft alerting pilots to an impending stall.[36]

Invisible interface spaces have become available through sensor technology; infrared, ultrasound and cameras are all now commonly used.[38] Transparency of interfacing with content is enhanced when an immediate and direct link via meaningful mapping is in place: the user receives direct and immediate feedback to input, and the content's response becomes an interface affordance (Gibson 1979).

Multimodal fusion

The process of integrating information from various input modalities and combining them into a complete command is referred to as multimodal fusion.[5] In the literature, three main approaches to the fusion process have been proposed, according to the main architectural levels (recognition and decision) at which the fusion of the input signals can be performed: recognition-based,[9][10][39] decision-based,[7][8][11][40][41][42][43] and hybrid multi-level fusion.[4][6][44][45][46][47][48][49]

Recognition-based fusion (also known as early fusion) consists of merging the outcomes of each modal recognizer by using integration mechanisms such as statistical integration techniques, agent theory, hidden Markov models, and artificial neural networks. Examples of recognition-based fusion strategies are action frame,[39] input vectors[9] and slots.[10]
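
As a rough sketch of fusion at the recognition level (a toy, not a reconstruction of any cited system), time-aligned feature frames from two modal front-ends can be concatenated into joint input vectors for a single downstream recognizer; the feature dimensions below are hypothetical.

    import numpy as np

    def joint_input_vectors(audio_frames, gesture_frames):
        # Early fusion: zip time-aligned per-frame feature vectors from
        # two front-ends into one joint vector per time step, suitable
        # for a single downstream recognizer (e.g. an HMM).
        T = min(len(audio_frames), len(gesture_frames))
        return [np.concatenate([audio_frames[t], gesture_frames[t]])
                for t in range(T)]

    audio = [np.ones(13) * t for t in range(5)]    # e.g. 13 MFCCs per frame
    gesture = [np.ones(6) * t for t in range(5)]   # e.g. 6 hand-pose features
    fused = joint_input_vectors(audio, gesture)
    print(len(fused), fused[0].shape)              # 5 frames of 19-dim vectors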

Decision-based fusion (also known as late fusion) merges the semantic information extracted from each modality, using specific dialogue-driven fusion procedures to yield the complete interpretation. Examples of decision-based fusion strategies are typed feature structures,[40][45] melting pots,[42][43] semantic frames,[7][11] and time-stamped lattices.[8]
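
A minimal sketch of the decision-level idea, loosely in the spirit of the semantic-frame strategies cited above (the slots and recognizer outputs are hypothetical): each modal recognizer contributes a partially filled semantic frame, and fusion unifies the frames into one command, failing when slot values conflict.

    def merge_frames(*partial_frames):
        # Unify partially filled semantic frames from different modal
        # recognizers into one command; raise on conflicting slot values.
        merged = {}
        for frame in partial_frames:
            for slot, value in frame.items():
                if value is None:
                    continue
                if slot in merged and merged[slot] != value:
                    raise ValueError(f"conflict on slot {slot!r}")
                merged[slot] = value
        return merged

    # Speech understanding yields the action and target; the pointing
    # gesture yields the object being referred to.
    speech_frame = {"action": "move", "object": None, "target": "trash"}
    gesture_frame = {"action": None, "object": "report.pdf", "target": None}
    print(merge_frames(speech_frame, gesture_frame))
    # {'action': 'move', 'target': 'trash', 'object': 'report.pdf'}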

The potential applications for multimodal fusion include learning environments, consumer relations, security/surveillance, computer animation, etc. Individually, modes are easily defined, but difficulty arises in having technology combine them into a coherent fusion.[50] It is difficult for algorithms to factor in dimensionality; there exist variables outside of current computational abilities. For example, semantic meaning: two sentences could have the same lexical meaning but different emotional information.[50]

In hybrid multi-level fusion, the integration of input modalities is distributed between the recognition and decision levels. Hybrid multi-level fusion includes the following three methodologies: finite-state transducers,[45] multimodal grammars[6][44][46][47][48][49][51] and dialogue moves.[52]
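
A toy sketch of the multimodal-grammar idea (far simpler than the cited work; the rules below are hypothetical): grammar terminals carry a modality tag, so a parse succeeds only when the speech and gesture streams jointly satisfy a rule.

    # Terminals are (modality, symbol) pairs; nonterminals are strings.
    RULES = {
        "COMMAND": [[("speech", "delete"), "OBJECT"],
                    [("speech", "open"), "OBJECT"]],
        "OBJECT":  [[("speech", "that"), ("gesture", "point")]],
    }

    def parse(symbol, tokens, pos=0):
        # Recursive-descent check that `tokens` (a time-ordered list of
        # (modality, symbol) pairs) derives `symbol`; returns the new
        # position on success, or None on failure.
        if isinstance(symbol, tuple):  # terminal
            return pos + 1 if pos < len(tokens) and tokens[pos] == symbol else None
        for production in RULES[symbol]:  # nonterminal
            p = pos
            for part in production:
                p = parse(part, tokens, p)
                if p is None:
                    break
            else:
                return p
        return None

    tokens = [("speech", "delete"), ("speech", "that"), ("gesture", "point")]
    print(parse("COMMAND", tokens) == len(tokens))  # True: fused parse succeeds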

Ambiguity

A user's actions or commands produce multimodal inputs (a multimodal message[3]), which have to be interpreted by the system. The multimodal message is the medium that enables communication between users and multimodal systems. It is obtained by merging information conveyed via several modalities, considering the different types of cooperation between modalities,[53] the time relationships[54] among the involved modalities and the relationships between chunks of information connected with these modalities.[55]

The natural mapping between the multimodal input, which is provided by several interaction modalities (the visual and auditory channels and the sense of touch), and information and tasks implies managing the typical problems of human-human communication, such as ambiguity. An ambiguity arises when more than one interpretation of an input is possible. A multimodal ambiguity[12] arises if an element provided by one modality has more than one interpretation (i.e. ambiguities are propagated to the multimodal level), and/or if elements connected with each modality are univocally interpreted but the information referring to the different modalities is incoherent at the syntactic or semantic level (i.e. a multimodal sentence having different meanings or different syntactic structures).

In "The Management of Ambiguities",[14] the methods for solving ambiguities and for providing the correct interpretation of the user's input are organized in three main classes: prevention, a-posterior resolution and approximation resolution methods.[13][15]

Prevention methods require users to follow a predefined interaction behaviour according to a set of transitions between the different allowed states of the interaction process. Examples of prevention methods are the procedural method,[56] reduction of the expressive power of the language grammar[57] and improvement of the expressive power of the language grammar.[58]

A-posteriori resolution of ambiguities uses a mediation approach.[16] Examples of mediation techniques are repetition, e.g. repetition by modality,[16] granularity of repair[59] and undo,[17] and choice.[18]
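
A minimal sketch of mediation by choice (the candidate interpretations and scores are hypothetical): the system presents a ranked n-best list and lets the user resolve the ambiguity.

    def mediate_by_choice(candidates):
        # Ask the user to pick among ranked interpretations.
        for i, (text, score) in enumerate(candidates, 1):
            print(f"{i}. {text} (confidence {score:.2f})")
        pick = int(input("Which did you mean? "))
        return candidates[pick - 1][0]

    n_best = [("call Maria", 0.48), ("call Mario", 0.41), ("call my mom", 0.11)]
    # mediate_by_choice(n_best)  # interactive: uncomment to run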

Approximation resolution methods do not require any user involvement in the disambiguation process. They may rely on theories such as fuzzy logic, Markov random fields, Bayesian networks and hidden Markov models.[13][15]
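
A minimal sketch of the approximation idea (a naive independence assumption, far simpler than the cited fuzzy-logic or Bayesian models; all scores are hypothetical): each candidate interpretation is scored from per-modality confidences and the best one is chosen without user involvement.

    def resolve(interpretations):
        # Score each candidate as the product of its per-modality
        # confidences (a naive-Bayes-style independence assumption)
        # and return the most probable one.
        def score(interp):
            p = 1.0
            for confidence in interp["confidences"].values():
                p *= confidence
            return p
        return max(interpretations, key=score)

    candidates = [
        {"command": "delete file report.pdf",
         "confidences": {"speech": 0.7, "gesture": 0.9}},
        {"command": "delete folder reports",
         "confidences": {"speech": 0.6, "gesture": 0.4}},
    ]
    print(resolve(candidates)["command"])  # -> delete file report.pdf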

References

  1. Bourguet, M.L. (2003). "Designing and Prototyping Multimodal Commands". Proceedings of Human-Computer Interaction (INTERACT'03), pp. 717-720.
  2. Stivers, T.; Sidnell, J. (2005). "Introduction: Multimodal interaction". Semiotica, 156(1/4), pp. 1-20.
  3. Caschera, M.C.; Ferri, F.; Grifoni, P. (2007). "Multimodal interaction systems: information and time features". International Journal of Web and Grid Services (IJWGS), Vol. 3, Issue 1, pp. 82-99.
  4. D'Ulizia, A.; Ferri, F.; Grifoni, P. (2010). "Generating Multimodal Grammars for Multimodal Dialogue Processing". IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, Vol. 40, No. 6, pp. 1130-1145.
  5. D'Ulizia, A. (2009). "Exploring Multimodal Input Fusion Strategies". In: Grifoni, P. (ed.), Handbook of Research on Multimodal Human Computer Interaction and Pervasive Services: Evolutionary Techniques for Improving Accessibility. IGI Publishing, pp. 34-57.
  6. Sun, Y.; Shi, Y.; Chen, F.; Chung, V. (2007). "An Efficient Multimodal Language Processor for Parallel Input Strings in Multimodal Input Fusion". In Proceedings of the International Conference on Semantic Computing, pp. 389-396.
  7. Russ, G.; Sallans, B.; Hareter, H. (2005). "Semantic Based Information Fusion in a Multimodal Interface". International Conference on Human-Computer Interaction (HCI'05), Las Vegas, Nevada, USA, 20–23 June, pp. 94-100.
  8. Corradini, A.; Mehta, M.; Bernsen, N.O.; Martin, J.-C. (2003). "Multimodal Input Fusion in Human-Computer Interaction on the Example of the on-going NICE Project". In Proceedings of the NATO-ASI Conference on Data Fusion for Situation Monitoring, Incident Detection, Alert and Response Management, Yerevan, Armenia.
  9. Pavlovic, V.I.; Berry, G.A.; Huang, T.S. (1997). "Integration of audio/visual information for use in human-computer intelligent interaction". Proceedings of the 1997 International Conference on Image Processing (ICIP '97), Volume 1, pp. 121-124.
  10. Andre, M.; Popescu, V.G.; Shaikh, A.; Medl, A.; Marsic, I.; Kulikowski, C.; Flanagan, J.L. (1998). "Integration of Speech and Gesture for Multimodal Human-Computer Interaction". In Second International Conference on Cooperative Multimodal Communication, 28–30 January, Tilburg, The Netherlands.
  11. Vo, M.T.; Wood, C. (1996). "Building an application framework for speech and pen input integration in multimodal learning interfaces". In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP'96), May 7–10, IEEE Computer Society, Volume 06, pp. 3545-3548.
  12. Caschera, M.C.; Ferri, F.; Grifoni, P. (2013). "From Modal to Multimodal Ambiguities: a Classification Approach". Journal of Next Generation Information Technology (JNIT), Vol. 4, No. 5, pp. 87-109.
  13. Caschera, M.C.; Ferri, F.; Grifoni, P. (2013). "InteSe: An Integrated Model for Resolving Ambiguities in Multimodal Sentences". IEEE Transactions on Systems, Man, and Cybernetics: Systems, Vol. 43, Issue 4, pp. 911-931.
  14. Caschera, M.C.; Ferri, F.; Grifoni, P. (2007). "The Management of Ambiguities". In Visual Languages for Interactive Computing: Definitions and Formalizations. IGI Publishing, pp. 129-140.
  15. Chai, J.; Hong, P.; Zhou, M.X. (2004). "A probabilistic approach to reference resolution in multimodal user interface". In Proceedings of the 9th International Conference on Intelligent User Interfaces, Madeira, Portugal, January 2004, pp. 70-77.
  16. Dey, A.K.; Mankoff, J. (2005). "Designing mediation for context-aware applications". ACM Transactions on Computer-Human Interaction 12(1), pp. 53-80.
  17. Spilker, J.; Klarner, M.; Görz, G. (2000). "Processing Self Corrections in a speech to speech system". COLING 2000, pp. 1116-1120.
  18. Mankoff, J.; Hudson, S.E.; Abowd, G.D. (2000). "Providing integrated toolkit-level support for ambiguity in recognition-based interfaces". Proceedings of the ACM CHI'00 Conference on Human Factors in Computing Systems, pp. 368-375.
  19. Grifoni, P. (2009). "Multimodal fission". In: Multimodal Human Computer Interaction and Pervasive Services. IGI Global, pp. 103-120.
  20. Grifoni, P.; Ferri, F.; Caschera, M.C.; D'Ulizia, A.; Mazzei, M. (2014). "MIS: Multimodal Interaction Services in a cloud perspective". JNIT: Journal of Next Generation Information Technology, Vol. 5, No. 4, pp. 1-10.
  21. Kettebekov, S.; Sharma, R. (2001). "Toward Natural Gesture/Speech Control of a Large Display". Proceedings of EHCI '01, the 8th IFIP International Conference on Engineering for Human-Computer Interaction, pp. 221-234.
  22. Vassiliou, M.; Sundareswaran, V.; Chen, S.; Behringer, R.; Tam, C.; Chan, M.; Bangayan, P.; McGee, J. (2000). "Integrated Multimodal Human-Computer Interface and Augmented Reality for Interactive Display Applications". In Darrel G. Hopper (ed.), Cockpit Displays VII: Displays for Defense Applications (Proc. SPIE 4022), pp. 106-115. ISBN 0-8194-3648-8.
  23. Vitense, H.S.; Jacko, J.A.; Emery, V.K. (2002). "Multimodal feedback: establishing a performance baseline for improved access by individuals with visual impairments". ACM Conference on Assistive Technologies.
  24. Soleymani, Mohammad; Garcia, David; Jou, Brendan; Schuller, Björn; Chang, Shih-Fu; Pantic, Maja (September 2017). "A survey of multimodal sentiment analysis". Image and Vision Computing. 65: 3-14. doi:10.1016/j.imavis.2017.08.003.
  25. Karray, Fakhreddine; Alemzadeh, Milad; Saleh, Jamil Abou; Arab, Mo Nours (2008). "Human-Computer Interaction: Overview on State of the Art". International Journal on Smart Sensing and Intelligent Systems. 1: 137-159. doi:10.21307/ijssis-2017-283.
  26. Poria, Soujanya; Cambria, Erik; Bajpai, Rajiv; Hussain, Amir (September 2017). "A review of affective computing: From unimodal analysis to multimodal fusion". Information Fusion. 37: 98-125. doi:10.1016/j.inffus.2017.02.003. hdl:1893/25490.
  27. "Google AI to make phone calls for you". BBC News. 8 May 2018. Retrieved 12 June 2018.
  28. Wollmer, Martin; Weninger, Felix; Knaup, Tobias; Schuller, Bjorn; Sun, Congkai; Sagae, Kenji; Morency, Louis-Philippe (May 2013). "YouTube Movie Reviews: Sentiment Analysis in an Audio-Visual Context". IEEE Intelligent Systems. 28 (3): 46-53. doi:10.1109/MIS.2013.34. S2CID 12789201.
  29. Pereira, Moisés H. R.; Pádua, Flávio L. C.; Pereira, Adriano C. M.; Benevenuto, Fabrício; Dalip, Daniel H. (9 April 2016). "Fusing Audio, Textual and Visual Features for Sentiment Analysis of News Videos". arXiv:1604.02612 [cs.CL].
  30. Zucco, Chiara; Calabrese, Barbara; Cannataro, Mario (November 2017). "Sentiment analysis and affective computing for depression monitoring". 2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). IEEE, pp. 1988-1995. doi:10.1109/bibm.2017.8217966. ISBN 978-1-5090-3050-7. S2CID 24408937.
  31. Pang, Bo; Lee, Lillian (2008). Opinion Mining and Sentiment Analysis. Hanover, MA: Now Publishers. ISBN 978-1601981509.
  32. Sun, Shiliang; Luo, Chen; Chen, Junyu (July 2017). "A review of natural language processing techniques for opinion mining systems". Information Fusion. 36: 10-25. doi:10.1016/j.inffus.2016.10.004.
  33. Oviatt, S. (2002). "Multimodal interfaces". In Jacko, J.; Sears, A. (eds.), The Human-Computer Interaction Handbook. Lawrence Erlbaum.
  34. Bauckhage, C.; Fritsch, J.; Rohlfing, K.J.; Wachsmuth, S.; Sagerer, G. (2002). "Evaluating integrated speech- and image understanding". International Conference on Multimodal Interfaces. doi:10.1109/ICMI.2002.1166961.
  35. Ismail, N.A.; O'Brien, E.A. (2008). "Enabling Multimodal Interaction in Web-Based Personal Digital Photo Browsing". International Conference on Computer and Communication Engineering. Archived from the original on 2011-07-18. Retrieved 2010-03-03.
  36. Sarter, N.B. (2006). "Multimodal information presentation: Design guidance and research challenges". International Journal of Industrial Ergonomics. 36 (5): 439-445. doi:10.1016/j.ergon.2006.01.007.
  37. Geldard, F.A. (1957). "Adventures in tactile literacy". American Psychologist. 12 (3): 115-124. doi:10.1037/h0040416.
  38. Brooks, A.; Petersson, E. (2007). "SoundScapes: non-formal learning potentials from interactive VEs". SIGGRAPH. doi:10.1145/1282040.1282059.
  39. Vo, M.T. (1998). "A Framework and Toolkit for the Construction of Multimodal Learning Interfaces". PhD thesis, Carnegie Mellon University, Pittsburgh, USA.
  40. Cohen, P.R.; Johnston, M.; McGee, D.; Oviatt, S.L.; Pittman, J.; Smith, I.A.; Chen, L.; Clow, J. (1997). "QuickSet: Multimodal interaction for distributed applications". ACM Multimedia, pp. 31-40.
  41. Johnston, M. (1998). "Unification-based Multimodal Parsing". Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics (COLING-ACL '98), August 10–14, Université de Montréal, Montreal, Quebec, Canada, pp. 624-630.
  42. Nigay, L.; Coutaz, J. (1995). "A generic platform for addressing the multimodal challenge". Proceedings of the Conference on Human Factors in Computing Systems, ACM Press.
  43. Bouchet, J.; Nigay, L.; Ganille, T. (2004). "ICARE software components for rapidly developing multimodal interfaces". ICMI '04: Proceedings of the 6th International Conference on Multimodal Interfaces (New York, NY, USA), ACM, pp. 251-258.
  44. D'Ulizia, A.; Ferri, F.; Grifoni, P. (2007). "A Hybrid Grammar-Based Approach to Multimodal Languages Specification". OTM 2007 Workshop Proceedings, 25–30 November 2007, Vilamoura, Portugal, Springer-Verlag, Lecture Notes in Computer Science 4805, pp. 367-376.
  45. Johnston, M.; Bangalore, S. (2000). "Finite-state Multimodal Parsing and Understanding". In Proceedings of the International Conference on Computational Linguistics, Saarbruecken, Germany.
  46. Sun, Y.; Chen, F.; Shi, Y.D.; Chung, V. (2006). "A novel method for multi-sensory data fusion in multimodal human computer interaction". In Proceedings of the 20th Conference of the Computer-Human Interaction Special Interest Group (CHISIG) of Australia on Computer-Human Interaction, Sydney, Australia, pp. 401-404.
  47. Shimazu, H.; Takashima, Y. (1995). "Multimodal Definite Clause Grammar". Systems and Computers in Japan, Vol. 26, No. 3, pp. 93-102.
  48. Johnston, M.; Bangalore, S. (2005). "Finite-state multimodal integration and understanding". Natural Language Engineering, Vol. 11, No. 2, pp. 159-187.
  49. Reitter, D.; Panttaja, E.M.; Cummins, F. (2004). "UI on the fly: Generating a multimodal user interface". In Proceedings of HLT-NAACL 2004, Boston, Massachusetts, USA.
  50. Guan, Ling. "Methods and Techniques for MultiModal Information Fusion". Circuits & Systems Society.
  51. D'Ulizia, A.; Ferri, F.; Grifoni, P. (2011). "A Learning Algorithm for Multimodal Grammar Inference". IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics, Vol. 41 (6), pp. 1495-1510.
  52. Pérez, G.; Amores, G.; Manchón, P. (2005). "Two strategies for multimodal fusion". In Proceedings of Multimodal Interaction for the Visualization and Exploration of Scientific Data, Trento, Italy, pp. 26-32.
  53. Martin, J.C. (1997). "Toward intelligent cooperation between modalities: the example of a system enabling multimodal interaction with a map". Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI'97) Workshop on 'Intelligent Multimodal Systems', Nagoya, Japan.
  54. Allen, J.F.; Ferguson, G. (1994). "Actions and events in interval temporal logic". Journal of Logic and Computation, Vol. 4, No. 5, pp. 531-579.
  55. Bellik, Y. (2001). "Technical requirements for a successful multimodal interaction". International Workshop on Information Presentation and Natural Multimodal Dialogue, Verona, Italy, 14–15 December.
  56. Lee, Y.C.; Chin, F. (1995). "An Iconic Query Language for Topological Relationship in GIS". International Journal of Geographical Information Systems 9(1), pp. 25-46.
  57. Calcinelli, D.; Mainguenaud, M. (1994). "Cigales, a visual language for geographic information system: the user interface". Journal of Visual Languages and Computing 5(2), pp. 113-132.
  58. Ferri, F.; Rafanelli, M. (2005). "GeoPQL: A Geographical Pictorial Query Language That Resolves Ambiguities in Query Interpretation". Journal on Data Semantics III, pp. 50-80.
  59. Suhm, B.; Myers, B.; Waibel, A. (1999). "Model-based and empirical evaluation of multimodal interactive error correction". In Proceedings of CHI'99, May 1999, pp. 584-591.
