Human-Computer Interaction: Sensory Actuators and Semiotics



Actuators may include:
• Light
• Sound
• Vibrators
• Solenoids
• Servos
• Heat/cool pads
And sensors:
• Heartbeat
• Temperature
• Skin conductance
• Pressure and bend sensors
• Accelerometer
• Microphone
• Light sensor
• Distance
• Gaze (eye-tracker)
• Buttons
• Faders


• Galvanic Skin Response (conductivity) – offers fast response, but only changes have meaning
• Webcam – facial expressions directly convey emotional states, but for mobile applications it would be difficult to mount a camera…
• Speech – intonation, rhythm, lexical stress, and other features in speech can be used effectively, but universal affect interpretation is difficult because different people naturally talk at different rates, etc. However, many forms of HCI do not require users to talk at all…
• Heart rate – only somewhat related to affective state; it also reflects simple physical activity…


(a) What is the Head-Related Transfer Function (HRTF)?
Answer: The Head-Related Transfer Function (HRTF) is defined as the ratio of the sound pressure spectrum measured at the eardrum to the sound pressure spectrum that would exist at the center of the head if the head were removed. The HRTF describes how a given sound wave input (parameterized by frequency and source location) is filtered by the diffraction and reflection properties of the head, pinna, and torso before the sound reaches the transduction machinery of the eardrum and inner ear.
(b) Describe how ITD and ILD relate to (a) above and explain how they work perceptually.
Answer: Sound localization refers to a listener's ability to identify the location or origin of a detected sound in direction and distance. It may also refer to the methods in acoustical engineering to simulate the placement of an auditory cue in a virtual 3D space (see binaural recording, wave field synthesis). The auditory system uses several cues for sound source localization, including time- and level-differences between ears, spectral information, timing analysis, correlation analysis, and pattern matching.
More specifically, sound localization is the process of determining the location of a sound source. The brain utilizes subtle differences in intensity, spectral, and timing cues to localize sound sources. Localization can be described in terms of three-dimensional position: the azimuth (horizontal angle), the elevation (vertical angle), and the distance (for static sounds) or velocity (for moving sounds).
The azimuth of a sound is signaled by the difference in arrival times between the ears, by the relative amplitude of high-frequency sounds (the shadow effect), and by the asymmetrical spectral reflections from various parts of our bodies, including torso, shoulders, and pinnae.
The distance cues are the loss of amplitude, the loss of high frequencies, and the ratio of the direct signal to the reverberated signal.
Depending on where the source is located, our head acts as a barrier to change the timbre, intensity, and spectral qualities of the sound, helping the brain orient where the sound emanated from. These minute differences between the two ears are known as interaural cues.
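The interaural time difference mentioned above can be approximated with Woodworth's classic spherical-head formula, ITD = (r/c)·(θ + sin θ). A minimal Python sketch; the head radius and speed of sound are typical assumed values, not measurements from the source:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Approximate interaural time difference (seconds) for a distant
    source at the given azimuth, using Woodworth's spherical-head model:
    ITD = (r / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# A source at 90 degrees (directly to one side) yields the maximum ITD,
# roughly 0.66 ms for an average-sized head.
print(round(woodworth_itd(90) * 1e3, 2))  # ≈ 0.66 (milliseconds)
```

A source straight ahead (azimuth 0°) gives zero ITD, consistent with the symmetry of the two ear paths.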
Lower frequencies, with longer wavelengths, diffract around the head, forcing the brain to rely mainly on the phase (timing) cues from the source.
Helmut Haas discovered that we localize a sound source from the earliest-arriving wavefront, even when later reflections are up to 10 decibels louder than the original wavefront. This principle is known as the Haas effect, a specific version of the precedence effect. Haas found that even a 1-millisecond delay between the original sound and a reflection increased the sense of spaciousness while still allowing the brain to discern the true location of the original sound. The nervous system combines all early reflections into a single perceptual whole, allowing the brain to process multiple different sounds at once; reflections arriving within about 35 milliseconds of each other and with similar intensity are fused in this way.


Answer: Most mammals are adept at resolving the location of a sound source using interaural time differences and interaural level differences. However, no such time or level differences exist for sounds originating along the circumference of conical slices whose axis lies along the line between the two ears (the "cone of confusion"), which leaves humans unable to localize such sources precisely.
Consequently, sound waves originating at any point along a given circumference slant height will have ambiguous perceptual coordinates: the listener cannot determine whether the sound originated from the back, front, top, bottom, or anywhere else along the circumference at the base of a cone at any given distance from the ear. The importance of these ambiguities is vanishingly small for sound sources very close to or very far away from the subject, but it is the intermediate distances that matter most in terms of fitness.
These ambiguities can be removed by moving, rotating, or tilting the head, which introduces a shift in both the amplitude and phase of the sound waves arriving at each ear. This translates the vertical orientation of the interaural axis horizontally, thereby leveraging the mechanism of localization on the horizontal plane. Moreover, even with no alteration in the angle of the interaural axis (i.e., without tilting one's head), the hearing system can capitalize on interference patterns generated by the pinnae, the torso, and even the temporary re-purposing of a hand as an extension of the pinna (e.g., cupping one's hand around the ear).


(f) Explain how humans localize the sounds around them. What are the ILD and ITD and how are impulse responses used in the HRTF algorithm? Why is it important to use anechoic sound sources in conjunction with this algorithm?
Answer:
Inter-aural Level Difference (ILD): the difference in sound level between the two ears.
Inter-aural Time Difference (ITD): the difference in a sound's arrival time between the two ears.
The HRTF encodes ILD, ITD, and much more – by using typical Head-Related Impulse Response (HRIR) recordings of the 'natural EQ effects' in human hearing. Anechoic signals are needed as stimulus for a similar reason as above… using signals that already carry positional cues to human perception would defeat the purpose of the HRTF.
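Applying an HRIR in practice amounts to convolving the dry (anechoic) source with the left- and right-ear impulse responses measured for one direction. A minimal sketch; the HRIRs below are toy, hypothetical values (real HRIRs are measured recordings), chosen so the right ear receives a delayed (ITD) and attenuated (ILD) copy:

```python
import numpy as np

def render_binaural(anechoic, hrir_left, hrir_right):
    """Spatialize a dry mono signal by convolving it with the
    head-related impulse responses for one source direction."""
    return (np.convolve(anechoic, hrir_left),
            np.convolve(anechoic, hrir_right))

# Toy HRIRs for a source on the listener's left: the right-ear response
# is delayed by 2 samples (ITD) and halved in amplitude (ILD, -6 dB).
hrir_l = np.array([1.0, 0.3])
hrir_r = np.array([0.0, 0.0, 0.5, 0.15])
signal = np.array([1.0, -1.0, 0.5])
left, right = render_binaural(signal, hrir_l, hrir_r)
```

Because convolution imprints the HRIR's own delay and spectral shaping onto the signal, any reverberation already present in the source would be spatialized along with it, which is why anechoic material is required.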
(g) What is the difference between auralization and sonification? What happens to the volume of a sound when the distance between the listener and the sound source is cut in half? In what context might one wish to use sonification?
Answer:
Auralization is the simulation of acoustic spaces in software, which may for example be applied in areas such as predicting room acoustics for architects, interactive sound rendering in games, etc… Sonification does not simulate acoustics at all.
The sound pressure doubles (a roughly 6 dB increase) when the distance is halved, per the inverse distance law.
Sonification is the use of non-speech audio to convey information. More specifically, sonification is the transformation of data relations into perceived relations in an acoustic signal for the purposes of facilitating communication or interpretation. One might wish to use sonification to understand scientific data or processes in a different manner than visually, which may be better in certain situations.
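The distance/level relationship follows the inverse distance law for sound pressure (p ∝ 1/r), so halving the distance raises the level by 20·log10(2) ≈ 6 dB. A quick numeric check:

```python
import math

def level_change_db(r_old, r_new):
    """Change in sound pressure level (dB) when a listener moves from
    r_old to r_new, assuming a point source in a free field (p ~ 1/r)."""
    return 20.0 * math.log10(r_old / r_new)

print(level_change_db(2.0, 1.0))  # halving the distance: ~ +6.02 dB
```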


(a) What are LRA vibration motors (also known as linear vibrators)?
What are ERM vibration motors (also known as pager motors)?
Explain their commonalities and differences, and where each is commonly used.
Answer:
As with Eccentric Rotating Mass vibration motors (ERMs), the vibrations created by Linear Resonant Actuators (LRAs) are based upon the movement of a mass that causes repeated displacement.
An ERM has an off-centre load; when it rotates, the centripetal force causes the motor to move. The rotation is created by applying a current to the armature windings attached to the motor shaft. As these sit inside a magnetic field created by the permanent magnets on the inside of the motor's body, a force is created, causing the shaft to rotate. To ensure the rotation continues in the same direction, the current in the windings is reversed. This is achieved using static metal brushes at the motor terminals, which connect to a commutator that rotates with the shaft and windings. The different segments of the commutator connect to the brushes during rotation, and the current is reversed, maintaining the direction of rotation.
In a similar method, LRAs use magnetic fields and electrical currents to create a force. One major difference is the voice coil (equivalent of the armature windings) remains stationary and the magnetic mass moves instead. The mass is also attached to a spring, which helps it return to the centre. Driving the magnetic mass up and down causes the displacement of the LRA and thereby the vibration force.
As the voice coil remains stationary and the direction of the force on the magnet must be switched, the direction of the current in the voice coil must also be switched. This means that LRAs require an AC drive signal to function correctly.
The LRA works in a similar way to how a loudspeaker creates music. In a loudspeaker, the speaker cone produces audio waves through displacement. However, a loudspeaker is designed to operate over a range of frequencies, whereas an LRA is tuned to its resonant frequency.
LRAs have better haptic performance characteristics and are more efficient (at resonance); they are commonly used in cell phones. They have a couple of advantages over ERMs. They vibrate at a fixed resonant frequency, which means that varying the amplitude of vibration doesn't affect the vibration frequency (as it does in ERM motors). Having no internal brushes or commutator, they last longer and also have improved haptic response times. However, they are limited in vibration strength because they are limited in size.
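Since an LRA requires an AC drive at its resonant frequency, a driver typically synthesizes a sine wave at that frequency and scales only its amplitude. A small sketch; the 175 Hz resonance and the sample rate are illustrative values, not specifics from the source:

```python
import math

def lra_drive(resonant_hz=175.0, amplitude=1.0, duration_s=0.02, sample_rate=8000):
    """Generate samples of a sinusoidal AC drive signal tuned to the LRA's
    resonant frequency. Scaling the amplitude changes vibration strength
    without shifting the frequency (unlike an ERM, where the two couple)."""
    n = int(duration_s * sample_rate)
    return [amplitude * math.sin(2 * math.pi * resonant_hz * i / sample_rate)
            for i in range(n)]

samples = lra_drive()
```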


(c) What is a PWM signal?
Answer: A Pulse Width Modulation (PWM) signal is a square wave whose on-time within each fixed period is varied to control the average power or value delivered.


The PWM signal has three separate components: What are they?

(1) Voltage (2) Frequency (3) Duty Cycle

What is a duty cycle?

How is it expressed?

Illustrate the difference in Duty Cycles as waveforms 0%; 25%; 50%; 75%; 100% over period = 1 / f. Include label High (V PWM) and Low (0V)

Answer:

The Duty Cycle represents the length of the On pulse relative to one full period cycle, and it is expressed as a percentage.
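The requested waveforms can be rendered as text; a small Python sketch, where '#' marks High (V PWM) and '_' marks Low (0 V) over one period (period = 1/f, here divided into 20 slots for display):

```python
def pwm_waveform(duty_pct, period_slots=20):
    """Render one PWM period as text: '#' = High (V_PWM), '_' = Low (0 V)."""
    high = round(period_slots * duty_pct / 100)
    return "#" * high + "_" * (period_slots - high)

# One line per requested duty cycle: 0%, 25%, 50%, 75%, 100%.
for duty in (0, 25, 50, 75, 100):
    print(f"{duty:3d}% |{pwm_waveform(duty)}|")
```

At 0% the output stays at 0 V for the whole period; at 100% it stays at V PWM; the intermediate duty cycles split the period accordingly.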

Adaptive interfaces use 'learned knowledge' (data gathered by the system about the user's interaction patterns and contexts) to improve the user experience. "Google Now" is a real-world example of this that runs on the Android platform: it learns by observing the behaviours of the user. For example, it may automatically display updates about a sports team during a live game, based on your web browsing and search history (thereby 'learning' which is your favourite team). In some contexts this may be great, but are adaptive interfaces universally considered a good solution? What considerations must you as a designer grapple with when choosing between adaptive and static interfaces?

Answer:

Adaptive user interfaces should not be considered a panacea for all problems. The designer should seriously consider whether the user really needs an adaptive system. The most common concern regarding adaptive interfaces is the violation of standard usability principles; in fact, evidence exists suggesting that static interface designs sometimes yield better performance than adaptive ones.

An important issue is how the interaction techniques might change to take the varying input and output hardware devices into account. A system might choose the appropriate interaction techniques taking into account the input and output capabilities of the devices and the user preferences. Many researchers today are focusing on context aware interfaces, recognition-based interfaces, intelligent and adaptive interfaces, and multimodal perceptual interfaces…

(a) What is a FSR?

Answer: A Force Sensing Resistor (FSR) is a polymer thick film (PTF) device that exhibits a decrease in resistance with an increase in the force applied to its active surface. Its force sensitivity is optimized for use in human touch control of electronic devices. An FSR is not a load cell or strain gauge, though it has similar properties, and it is not suitable for precision measurements.
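An FSR is typically read through a simple voltage divider into an ADC. A minimal sketch of the divider maths; the 10 kΩ pull-down and 5 V supply are illustrative component choices, not specifics from the source:

```python
def fsr_divider_voltage(r_fsr_ohms, r_fixed_ohms=10_000, v_supply=5.0):
    """Output voltage of a voltage divider with the FSR on the high side
    and a fixed pull-down resistor read by an ADC. As applied force
    increases, the FSR's resistance drops and Vout rises."""
    return v_supply * r_fixed_ohms / (r_fsr_ohms + r_fixed_ohms)

# Light touch (high resistance) vs firm press (low resistance):
light = fsr_divider_voltage(100_000)   # ~0.45 V
firm = fsr_divider_voltage(1_000)      # ~4.55 V
```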

(a) What is Fitts’s Law?

Answer:

Short = Fitts' Law: The time to acquire a target is a function of the distance to and size of the target.

Long = Fitts's law (often cited as Fitts' law) is a descriptive model of human movement primarily used in human–computer interaction and ergonomics. This scientific law predicts that the time required to rapidly move to a target area is a function of the ratio between the distance to the target and the width of the target. Fitts's law is used to model the act of pointing, either by physically touching an object with a hand or finger, or virtually, by pointing to an object on a computer monitor using a pointing device. Fitts's law has been shown to apply under a variety of conditions, with many different limbs (hands, feet, the lower lip, head-mounted sights, eye gaze), manipulanda (input devices), physical environments (including underwater), and user populations (young, old, special educational needs, and drugged participants).


In its common (Shannon) formulation, MT = a + b · ID with ID = log2(D/W + 1), where:

MT is the average time to complete the movement.

a and b are model parameters.

ID is the index of difficulty.

D is the distance from the starting point to the centre of the target.

W is the width of the target measured along the axis of motion. W can also be thought of as the allowed error tolerance in the final position, since the final point of the motion must fall within ±W⁄2 of the target's centre.
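The parameters above can be tied together in a short sketch of the Shannon formulation, MT = a + b · log2(D/W + 1); the a and b values below are illustrative placeholders, since real values must be fitted empirically:

```python
import math

def fitts_mt(distance, width, a=0.1, b=0.15):
    """Predicted movement time (seconds) under the Shannon formulation of
    Fitts's law: MT = a + b * ID, with ID = log2(D/W + 1) in bits.
    a and b are empirically fitted; the values here are illustrative."""
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# Doubling target width at the same distance lowers ID and hence MT:
mt_small = fitts_mt(distance=300, width=10)
mt_large = fitts_mt(distance=300, width=20)
```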

Fitts's law has been extended to two-dimensional tasks in two different ways. For navigating e.g. hierarchical pull-down menus, the user must generate a trajectory with the pointing device that is constrained by the menu geometry; for this application the ?????????? was derived.

Question: What is ?????????

Answer: Accot-Zhai steering law

(a) What do the acronyms EMG and EEG stand for? What is the equation to calculate human heart rate in Beats Per Minute? Write the code (in a known language) for calculating this, based on a simple sensor such as the Grove pulse sensor (it generates a single event for each heartbeat).

Answer: Electromyogram (muscle tension), Electroencephalogram (brain activity)

Heart rate equation: BPM = 60 / (R-R interval), where the R-R interval is the time in seconds between consecutive heartbeats.
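The code requested by the question was omitted from the answer. A minimal Python sketch, assuming the pulse sensor delivers one timestamp per heartbeat event; averaging over several intervals is a smoothing choice added here, not part of the original:

```python
def bpm_from_beats(beat_times_s):
    """Heart rate in BPM from consecutive beat timestamps (seconds):
    BPM = 60 / (R-R interval). Averages all intervals to smooth noise."""
    intervals = [t2 - t1 for t1, t2 in zip(beat_times_s, beat_times_s[1:])]
    if not intervals:
        return None  # need at least two beats to measure an interval
    mean_rr = sum(intervals) / len(intervals)
    return 60.0 / mean_rr

# Beats 0.75 s apart -> 80 BPM
print(bpm_from_beats([0.0, 0.75, 1.5, 2.25]))  # 80.0
```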

(a) What is semiotics?

Answer: Semiotics is the study of signs and symbols and their use or interpretation. The importance of signs and signification has been recognized throughout much of the history of philosophy, and in psychology as well. Plato and Aristotle both explored the relationship between signs and the world, and Augustine considered the nature of the sign within a conventional system. These theories have had a lasting effect in Western philosophy, especially through scholastic philosophy. (More recently, Umberto Eco, in his Semiotics and the Philosophy of Language, has argued that semiotic theories are implicit in the work of most, perhaps all, major thinkers.)

Semioticians classify signs or sign systems in relation to the way they are transmitted (see modality). This process of carrying meaning depends on the use of codes that may be the individual sounds or letters that humans use to form words, the body movements they make to show attitude or emotion, or even something as general as the clothes they wear. To coin a word to refer to a thing (see lexical words), the community must agree on a simple meaning (a denotative meaning) within their language, but that word can transmit that meaning only within the language's grammatical structures and codes (see syntax and semantics). Codes also represent the values of the culture, and are able to add new shades of connotation to every aspect of life.

(b) The three main branches of semiotics are: Semantics, Syntactics, and Pragmatics – give a short sentence description of each.

Answer:

Semantics – relation between signs and the things to which they refer; their denotata, or meaning

Syntactics – relations among signs in formal structures

Pragmatics – relation between signs and the effects they have on the people who use them
