
AU2013100081A4 - Home Environment Automated Real Time (HEART) System - Google Patents


Info

Publication number
AU2013100081A4
AU2013100081A4
Authority
AU
Australia
Prior art keywords
user
screen
security
data
state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2013100081A
Inventor
Janstan Josef Ong Espinosa
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
AUTOMATE PROJECTS Pty Ltd
Original Assignee
AUTOMATE PROJECTS Pty Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2012904648A (AU2012904648A0)
Application filed by AUTOMATE PROJECTS Pty Ltd
Priority to AU2013100081A
Application granted
Publication of AU2013100081A4
Anticipated expiration
Current legal status: Ceased

Landscapes

  • Selective Calling Equipment (AREA)
  • Telephonic Communication Services (AREA)

Abstract

A system for, and a method of, providing a home automation system and controller is disclosed which supports multiple numbers and types of data communication with devices and subsystems within an establishment, as well as with systems external to the establishment. The system is based upon a central processor, such as a microprocessor-based computer, and is connected to devices through an RF signal and/or a signal over the powerlines to control various products and subsystems within an establishment (i.e. a home or commercial building), such as lighting systems, security systems, various sensors, appliances and multiple external terminals. It also allows for the input of commands by a variety of means, such as tablet computers, smartphones, mobile phones, mobile computers, screens, voice recognition systems, gesture recognition systems, glass/surface input, custom switches and/or any device capable of providing input to a computer system. The system utilises number plate recognition to open and close selected entry points to an establishment. The system functions can be readily controlled by the user utilising a device with a built-in internet browser.

Description

Editorial Note 2013100081: There are only 41 pages of Description.

Home Environment Automation Real Time (HEART) SYSTEM

Table of Contents

Background
Summary and Objects of the Invention
Detailed Description
TOUCH GLASS
VOICE RECOGNITION
MULTIPLE TYPES OF USER INPUT DEVICES
INFORMATION RETRIEVAL
MULTIDIMENSIONAL INTERACTION AND CONTROL
INTERFACE TO A MULTI-ZONE SECURITY SYSTEM
SPOKEN MESSAGES AS CUES
THE SOFTWARE SYSTEM
THE SUBSYSTEM INTERRUPT HANDLER
THE INTERNAL SCHEDULED INTERRUPT SERVER (ISIS)
THE POLLING LOOP
THE TASKS
THE DEVICE DRIVERS
OPERATION
What is claimed

Background

The present invention relates to computer-controlled systems for automating a plurality of diverse electrically controllable subsystems. More particularly, the present invention relates to a microprocessor-based electronic control system for use in the automation and control of electrical subsystems found in the home environment. Such systems may also be utilised in commercial applications. Sophisticated electronic control over complex systems has been limited primarily to industrial and commercial applications. Such electronic control systems usually require extensive and costly equipment, and a technically trained operator. However, as consumer products and systems become more and more complex, untrained people in home environments have become increasingly desirous of sophisticated electronic control systems that enable easy-to-use control over such home products and systems. Many such products and systems are also increasingly being used in commercial environments, such as conference rooms, in which it is desirable to provide an easy-to-use control system for individuals who are technically challenged. Known home automation systems are generally built around a control box which is connected, through existing household AC wiring and new networks, to one or more modules distributed throughout the home. The appliances and/or lights to be controlled are in turn connected to the modules and may thus be controlled from the control box by the user.
Such a system can be easily disrupted by outside environmental electrical disturbances, such as weather conditions, and it is too costly to install new network cables. In addition, such systems allow control of only a relatively limited number of types of electrical appliances. They do not allow for any sophisticated programming functions other than perhaps a time-on and time-off feature. Thus, such systems are of relatively limited utility for home automation purposes. More sophisticated home automation system designs are known, which are generally built around a programmable microprocessor connected to a local bus which accepts a limited number of input/output control cards. Such systems may allow the connection of one or two user control devices, such as a keypad or a screen, for inputting control commands to the home automation system. However, such systems support only a predetermined, limited number of devices and user interfaces. Generally speaking, in order to expand such systems, a second identical controller is required, with its own programming for controlling its own connected devices and user interfaces. Although such an approach may be cost-effective for small home automation systems, it is too limiting for more sophisticated automation tasks or for larger homes. The inventive home automation system disclosed in this application overcomes such limitations and drawbacks of the prior art systems by being designed as a master controller for intelligent subsystems. Although it can also control simple devices directly, the primary method of control of the present invention is by means of multiple communication channels which are compatible with, or convertible to, a wide variety of standard data communications protocols.
Thus, the system disclosed in this application can be connected to virtually any type of electrically controlled device that may presently be found in a home or commercial building, or can conceivably be connected via an RF signal, a signal over the powerline, local area networks or future home automation data buses. The system may also be connected to devices to be controlled directly by parallel and serial ports.
With the innovative expansion capabilities of the inventive system, simultaneous operation of multiple types of user devices can now be achieved. For example, the home automation system described herein may be connected to simple keyboards, serial data keypads, screens, voice recognition circuitry, hand-held remote controls, computer keyboards, mobile phones, tablets, desktop computers, glass or telephones. In fact, virtually any type of electronic subsystem may be connected, by means of an appropriate interface, to the present system. The present invention is also compatible with commercially available automation controllers. Thus, for example, when a control task requires an extensive number of inputs and outputs, the system can become the "master controller" for a wide variety of commercial or special purpose automation controllers. Such a capability is not available in any other known home automation system controller. Due to the innovative expansion capabilities discussed above, the present invention, while allowing simultaneous operation of multiple types of user devices and compatibility with commercial automation controllers, is also compatible with intelligent appliances and subsystems and with external information retrieval services. Consumer appliances used in the home are increasingly becoming more intelligent. The appliance manufacturers are increasingly incorporating connections for microprocessor-based intelligent remote control. The system controller of the present invention embodies a multiple data port capability which provides for its connection to an unlimited number of such intelligent appliances simultaneously. For example, the expandable home automation system may be connected to control or communicate with intelligent audio/video systems, heating/cooling systems, access control systems, security systems, telephone systems, appliances and lighting systems. 
Having access to multiple data ports also allows the system disclosed in this application to dedicate one or more data ports for connection to external information services, or to gateways to information services, by means of a modem or other type of data connection. That allows the home automation system to become an information provider as well as a controller.

Summary and Objects of the Invention

In view of the foregoing, it should be apparent that there still exists a need in the art for a home automation control system which allows for the sophisticated control, including programming, of virtually any home subsystem or appliance in which electronic means are utilised, by the user, in a simple to understand and yet precise manner, to accomplish the desired control or monitoring function. It is, therefore, a primary object of this invention to provide an expandable home automation control system for providing sophisticated control functions over complex subsystems found in a home, characterised by intuitive and simple-to-use user interfaces. More particularly, it is an object of this invention to provide an expandable control system which provides for simple, intuitive and easy-to-use control over complex subsystems utilised in commercial buildings. Still more particularly, it is an object of this invention to provide an expandable home automation control system which includes a multiple-port, expandable controller.
Another object of the present invention is to provide an expandable home automation control system in which the user utilises high resolution colour graphics screens, tablets, mobile phones, computers and glass for instructing the system to perform its control functions. A further object of the present invention is to provide an expandable home automation control system in which voice recognition and gestures are utilised by the user to instruct the system to control subsystems present in the home. A further object of the present invention is to provide an expandable home automation system in which voice and gesture recognition may be used in concert with high resolution colour graphics displays in order to provide the user with an easy-to-use interface for instructing the system to perform its control or monitoring functions. Still another object of the present invention is to provide an expandable home automation system which incorporates multiple types of user devices which may be utilised simultaneously. A still further object of the present invention is to provide an expandable home automation control system which uses dynamic, object-oriented, touch- or cursor-controlled displays for controlling and scheduling actions. Still more particularly, it is an object of this invention to provide an expandable home automation control system which integrates tailored information retrieval together with communication services. Still another object of the present invention is to provide an expandable home automation control system which displays plan views of the home and allows the user to create certain moods. A still further object of the present invention is to provide an expandable home automation control system which provides multiple dimensions of interaction and control. Another object of the present invention is to provide an expandable home automation control system which utilises multifunction display monitors to display multiple types of video imagery.
A further object of the present invention is to provide an expandable home automation control system which utilizes an electronic interface to a multi-zone security system to thereby allow touch or cursor-control of the security system by means of graphics displays. Still another object of the present invention is to provide an expandable home automation control system in which spoken messages are utilised as prompts for user input. Briefly described, these and other objects of the invention are accomplished in accordance with its apparatus aspects by providing a home automation controller which is designed to support multiple numbers and multiple different types of data communications with external systems. Such a controller is unlimited in its means of communications with external systems since it is fundamentally compatible with all forms of data communications. The system controller utilizes a microprocessor-based computer with its associated memory and storage subsystems. The processor is connected by means of a high speed data bus to a plurality of RF and serial interfaces, external custom interfaces and external network interfaces. The external parallel and serial interfaces are connected to various external systems. If necessary, optional protocol translators are connected between those external systems and the external interfaces. The external custom interfaces are connected directly to various external systems, while the external network interfaces are connected directly to multiple external systems. 
By means of such a structure, a compatible microcomputer can be utilised to control various products and subsystems within a home or commercial building, such as lighting systems, security systems, various sensors, windows and multiple external terminals, and to allow for the input of commands by a variety of means such as glass, mobile phones, tablets, screens, voice recognition, telephones, custom switches or any device capable of providing input to a computer system. The method of the present invention is carried out by use of a central server, which is a standardised, modular software program configured for each installation. Secondary processors are utilised to relay information to the central processor, or to translate central processor commands into commands their dedicated devices can understand. A mesh network topology is currently utilised. The secondary processors manage the voice recognition and voice synthesis subsystems, telephone communication subsystems, screen subsystems, hand-held remote control unit communications, input/output control and monitoring, security/fire safety and monitoring, and other intelligent subsystems such as lighting, audio, video and HVAC.

Detailed Description

A block diagram representation of the expandable home automation system of the present invention is now described. The instant expandable home automation system is built around a Linux-based microcomputer. A Linux-based microcomputer, or any computer equalling or exceeding the computing speed or processing power of a Raspberry Pi, may be used with the present invention. The microcomputer, together with its random access memory (RAM), expanded RAM memory and associated hard disk drive, is expandable.
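The relationship described above between the central server and the secondary processors can be sketched in Python as follows. The class and subsystem names are invented for illustration; the point is only that each secondary processor translates a generic command into a form its dedicated device understands.

```python
class SecondaryProcessor:
    """A stand-in for a dedicated processor that fronts one subsystem."""

    def __init__(self, name, translate):
        self.name = name
        self.translate = translate  # maps a generic command to a device command

    def send(self, command):
        # In a real installation this would go out over the mesh network;
        # here we just return the translated command string.
        return f"{self.name}: {self.translate(command)}"

class CentralServer:
    """Central processor that routes user commands to secondary processors."""

    def __init__(self):
        self.processors = {}

    def register(self, subsystem, processor):
        self.processors[subsystem] = processor

    def dispatch(self, subsystem, command):
        return self.processors[subsystem].send(command)

server = CentralServer()
server.register("lighting", SecondaryProcessor("lighting", lambda c: f"DIM {c}"))
server.register("hvac", SecondaryProcessor("hvac", lambda c: f"SETPOINT {c}"))
```

Adding a new subsystem is then just another `register` call; the central server itself never needs to know the device-level protocol.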
The central processor is connected, by means of a data bus or its equivalent, through a plurality of standard or custom interfaces, either to control each of the subsystems automated within the home environment or to transmit or receive data or instructions from within the home environment. For example, for control purposes, the central processor is connected by means of the data bus to a multiple serial interface, which in turn is connected to a router and then to a mobile computer/tablet, a mobile phone, a touch surface (glass, mirror, wall, etc.), and so on. As will be described later, the user can instruct the home automation system, by means of mobile devices, web-capable devices, surfaces, voice recognition or gestures, to carry out a particular task. Under those circumstances, the central processor receives the input from the appropriate input devices after it has been processed by the serial interface, and then processes the appropriate instructions to perform the selected task. Multiple input devices, as desired, can be distributed throughout the automated environment. In order to increase the number of devices that may be used to control the automation system, or to add additional devices requiring a serial interface, a router is required. In addition to controlling the instant automation system by means of multiple devices, the central processor may also be connected to a terminal by technical personnel during maintenance procedures. A standard computer connected to the local/external network can be effectively utilised. A receiver for a hand-held remote control unit may also be connected, so that the user may be provided with a hand-held RF or other type of remote for commanding the central processor to perform the various available tasks. The central processor may also be connected by means of the bus to a first parallel interface which may be utilised both to receive and to output direct digital signals.
A wide variety of switch control devices may be connected through this parallel interface, either directly or through a process controller, to the switch control devices. The switches can each provide for the control of a plurality of devices and are connected wirelessly to the process controller, which, for this example, may preferably be a protocol translator. Both the relay output board and the input/output board may be connected to electrical appliances or devices such as door locks, security gates, lawn lights, speakers, or any other switch-controlled device. The relay output and input/output boards may also be connected to plumbing-related systems such as baths, showers, faucets, pools, spas and fountains. The analogue board may be connected directly to analogue sensors, which provide a voltage output indicating, for example, temperature, humidity, pressure, light level, distance, vibration, air quality, or any other parameter useful for automation purposes. The input/output board may also be connected to digital sensors, such as security sensors, pressure mats, driveway sensors, status relays, or other digital indicator devices. The security system used in the automated environment combines motion, temperature, humidity, light and pressure sensing, and has built-in image and video capture. The home security system is connected to a router, which is in turn connected to the central processor by means of a data bus or a wireless connection. It can provide simultaneous communications with multiple separate external systems. When thus configured, the data bus can communicate directly with any external system, laptop, tablet, mobile phone, telephone, etc. If, as is the case in the preferred embodiment, the external device uses a nonstandard protocol, the central processor can be used to convert between the recognised protocol and the protocol utilised by the device to be controlled.
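The analogue sensor boards described above produce a raw voltage that must be mapped to an engineering value before the system can act on it. A minimal sketch follows, assuming a simple linear 0–5 V scaling; the patent does not specify any particular scaling or sensor range, so all of the numbers here are illustrative.

```python
def voltage_to_celsius(volts: float, v_ref: float = 5.0,
                       t_min: float = -40.0, t_max: float = 120.0) -> float:
    """Linearly map a 0..v_ref sensor voltage onto a temperature range.

    0 V reads as t_min, v_ref reads as t_max; values in between are
    interpolated. Real sensors may need a nonlinear calibration curve.
    """
    return t_min + (volts / v_ref) * (t_max - t_min)
```

The same pattern applies to the other analogue quantities mentioned above (humidity, pressure, light level, and so on), each with its own assumed range.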
That same interface may also be used to connect the central processor and its data bus through a protocol converter, if necessary. As will be clear to those of ordinary skill in the art, the design of the inventive expandable home automation system described herein allows for the use of interface boards that are designed to be directly compatible with the data bus of the central processor. Such interface boards may then be literally "plugged in" in order to provide control of, and communications with, an external system. Examples of such analogue sensors are those which monitor outside temperature, bath water temperature, etc. It is also possible with the present inventive expandable home automation system design to connect to a completely external network, such as a local intranet or the internet, by means of an Ethernet network interface board. Using such an interconnection, a wide variety of applications such as information retrieval and remote home automation control can be achieved, for example by utilising multiple external terminals or the network file server connected to the Ethernet network interface, which is itself connected by means of the bus to the central processor. The central processor may also be connected, by means of its bus, to a graphics interface, such as a DisplayPort, HDMI or VGA interface, which is in turn connected to a respective video display monitor which provides the user with information regarding the operation of the home automation system. Voice recognition circuitry may also be connected by means of the bus to the central processor. A speech processor may be connected to one or more wireless speaker/microphones. Such remote microphones provide a means by which the user of the present expandable home automation system can communicate with the system by requesting tasks by voice.
The home automation system may provide voice communications to the user by means of one or more remote speakers, which are in turn connected to the speech processor. In addition, the central processor of the instant home automation system may be connected to a home telephone system or a single-line standard telephone system by means of a telephone interface. The home automation system achieves unlimited expandability and compatibility by using multiple readily available (in most instances) subsystems that can be added without limit. That function is achieved by the use of multiple flexible standard interfaces for controlling and communicating with a variety of external devices and subsystems, as well as translating modules which are used to communicate with and control systems which use nonstandard communication links. A more detailed description of the hardware implementation of the present expandable home automation system follows.
TOUCH GLASS

The present invention utilises a high resolution colour graphics touch glass to control certain specific features of the instant expandable home automation system. While such a method of display may be efficient for some limited home automation systems, it is only operable under circumstances where there are only minimal commands that can be given through such a graphics display. Where the automation system is more capable, it is useful for the touch glass displays to convey more information, as well as to effectively highlight touch-sensitive areas on the glass by means of an image projected on the surface. The present invention incorporates both the hardware and software necessary to provide screen resolution ranging from 640x350 pixels up to 7680x4320 pixels. Thus, depending upon the desired application and the type of screen desired to be displayed, the touch glass display can be provided with as much resolution and colour as necessary. Such display flexibility allows the present home automation system to incorporate powerful graphics features for ease of use, such as icons, pop-up windows, colour-coding, direct manipulation and detailed floor plans. It also allows the use of a wide range of colours in order to enhance the aesthetic appearance of the touchscreen displays and in order to colour-coordinate them with the surroundings in a home, commercial or other environment. Furthermore, as additional features not presently available become available on newer versions of graphics cards, such as animation and video mixing, the standard bus structure of the present invention will allow those features to be readily and easily added to the home automation system. The present home automation system is also hardware-compatible with many other readily available graphics boards which offer extended capabilities.
For example, by incorporating a different graphics card and making minor modifications to the home automation system software, the present system can provide graphics up to 7680x4320 pixels. The system may also include high performance graphics display processor cards for advanced graphics effects, including animation. Using the high resolution graphics discussed above in concert with high resolution touch glass, many functions in a home or other environment can be controlled. For example, audio/video entertainment systems can be actuated and caused to perform all of the functions available by use of their respective remote controls. This feature is accomplished by means of an infrared emitter device which is utilised, under control of the central processor, to input to the audio/video device being controlled the appropriate code for controlling the desired function. Alternatively, a direct network link or other electronic interface could also be used to control the audio/video device. Audio entertainment equipment, such as remotely controllable receivers, amplifiers, turntables, cassette players, compact disc players, Blu-ray players and TVs, as well as projectors, can be controlled in the same manner. For example, a certain selection or series of selections can be made, using the touch glass, for a Blu-ray player. An AM or FM station can also be selected from the touch glass menu. In addition, the user can use any touch glass to distribute audio or video signals to any one or more high fidelity speakers or display devices contained in the home or other environment.
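The infrared emitter mechanism described above amounts to a lookup from a device/function pair to the IR code the central processor hands to the emitter. A hedged Python sketch follows; the code table and device names are invented (the patent specifies no code values), and the emitter is represented by a plain callable.

```python
# Invented example code table: (device, function) -> raw IR code.
IR_CODES = {
    ("bluray", "play"): 0x20DF10EF,
    ("bluray", "stop"): 0x20DF8877,
    ("receiver", "volume_up"): 0x5EA158A7,
}

def send_ir(device: str, function: str, emit=print) -> int:
    """Look up the IR code for a device/function pair and emit it.

    `emit` stands in for the actual infrared emitter hardware driver.
    """
    code = IR_CODES[(device, function)]
    emit(f"IR 0x{code:08X} -> {device}.{function}")
    return code
```

A direct network link, as the text notes, would replace only the `emit` callable; the lookup and dispatch logic stays the same.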
The instant home automation system also utilises video monitoring cameras so that, for example, the user can elect to view the signal from such cameras on connected display devices, including but not limited to TVs, tablets and mobile phones. Other functions which can be performed by the home automation system involve the filling of baths, pools and spas, the maintenance of a desired temperature in those facilities, and the control of any pumps associated with those facilities. The home automation system can also control individual devices and appliances such as kitchen appliances, exhaust fans, humidifiers and dehumidifiers. It can control motorised devices such as skylights, draperies, furniture, walls, screens, ceilings, awnings, physical security barriers, door locks, and others. The home automation system can also control answering machines and voice mail systems, and provide maintenance reminders. Of course, all of these control functions can be extended to similar short-term residential applications, such as boats, aircraft, office suites, conference rooms, auditoriums, classrooms, theatres, hotels, hospitals and retirement homes. They also offer immediate application to the infirm or disabled population. Still other systems which may be controlled by the disclosed touch glass are sophisticated security systems with unlimited zones; ice and snow removal, by energising, for example, heaters built into sidewalks and driveways when precipitation is detected and the outside temperature drops below a certain predetermined temperature; the opening of locked doors; the initiating of telephone calls; and the logging of incoming and outgoing telephone calls. Information retrieval can also be accomplished utilising the instant touch glass, as can lighting moods throughout a home, by which, under touch glass command, certain predetermined settings for lights throughout the home can be readily set.
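The ice and snow removal rule described above reduces to a simple predicate: run the heaters when precipitation is detected and the outside temperature is below a preset threshold. A sketch follows; the threshold value is an assumed example, as the patent leaves it as "a certain predetermined temperature".

```python
FREEZE_THRESHOLD_C = 2.0  # hypothetical setpoint, installer-configurable

def heaters_should_run(precipitation: bool, outside_temp_c: float,
                       threshold_c: float = FREEZE_THRESHOLD_C) -> bool:
    """True when sidewalk/driveway heaters should be energised:
    precipitation detected AND outside temperature below the threshold."""
    return precipitation and outside_temp_c < threshold_c
```

The central processor would evaluate this predicate against the digital precipitation sensor and an analogue outside-temperature sensor, then drive the heater relays through the relay output board.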
In fact, any system in a home or other environment that can be automated can be effectively automated utilising the high resolution graphics displays described herein in concert with high resolution touch glass. Such systems, in addition to those described above, include telephone answering, controlling fountains or in-ground sprinkler systems, controlling kitchen and other appliances, controlling motorised drapes, windows and skylights, and the scheduling of many of these functions and those functions previously described. In addition, local information display, such as time, temperature, etc., can readily be accomplished utilising the instant touch glass.

VOICE RECOGNITION

Voice recognition is an ideal user interface for a home automation system because it allows the user to control functions in the home simply by speaking. Voice recognition is also useful in certain commercial environments. The use of the high resolution graphics display together with voice recognition allows the present home automation system to provide high quality visual cues to the user of the present control options, feedback on whether the user's command phrase has been recognised, and the results produced for the user as the result of a commanded function. One of the difficulties with the use of voice recognition in home and other environments is that such systems must perform under at least low-level background noise and must be able to extract key words or phrases out of a speech or general noise background. The inventive expandable home automation system meets these performance specifications by the use of certain hardware, together with software to enhance the performance of the voice recognition functions in the home environment, as well as a high resolution graphics display system.
The implementation of the voice recognition system described herein may be accomplished by means of the central processor and its interconnection to a speech processor, which in turn is connected to a remote microphone and a remote speaker. As previously discussed, a hard disk storage device is connected to the microcomputer for permanent storage of trained recognition vocabularies. In addition, an extended RAM memory is also provided for connection to the microcomputer to provide rapid loading of new vocabulary data. The speech processor may preferably be embodied in the present system by a tablet, mobile phone, laptop, personal computer, etc. The system software performs the basic voice recognition, vocabulary loading and microphone control functions. Also, as described later herein, a portion of the software which operates the central processor is dedicated to optimising the voice recognition parameters of the speech processor in order to maximise performance in a home automation or other environment. Using such a system, the user's voice can be utilised in the present home automation system to provide immediate or scheduled control of complete living moods, which are comprised of macros of multiple commands; audio/video entertainment system control, such as selection of the desired audio or video source; selection of the source parameters, such as channel selection, play, fast forward, rewind, volume, bass, treble, balance control, etc.; as well as the audio distribution network, for example selecting which set of speakers the audio or video system will send its output to, and providing a muting function for each of those sets of speakers. In addition, the system can also both control and switch between a plurality of security video cameras, including panning, tilting and zooming of such security cameras.
Finally, the voice recognition functions of the present home automation system can be utilised to control a complex home security system with an unlimited number of zones, which is connected to the central processor. When the voice recognition functions of the present home automation system are utilised in concert with the high resolution colour graphics display, immediate or scheduled control of many home automation features can be accomplished. Such features include direct control of the lighting within the home, or, if preferred, controlling the lighting mood (i.e. choosing from one of a predetermined number of preset lighting levels) either in one room or throughout many rooms of a home. Such a system can select complete living moods, which consist of macros of multiple commands, for various lighting, temperature, open or closed draperies, skylight settings, and entertainment system settings that have been predetermined and stored as data files. The voice recognition system, in connection with the high resolution colour graphics display, can also provide home appliance control as well as security system control for components connected to the central processor, as has been previously described. In addition, the audio/video entertainment system controls and audio distribution network described above in connection with the use of only a voice recognition system can obviously be controlled with both the high resolution colour graphics display and voice recognition systems. Additional functions which can be controlled with the combination of voice recognition and high resolution colour graphics displays are the security camera switching and control, security camera functions and the complex security system, as previously described above. Also, local and remote information retrieval, such as time and weather and access to various remote data bases, can be achieved using the voice recognition and high resolution colour graphics display combination.
Control of the locks for various doors of the home as well as bath and spa controls, for example, to run the bath, set the temperature and turn on any pumps or motors, as previously described, can also be achieved by the use of the voice recognition system and high resolution colour graphics display system combination. In addition, the telephone system can also be controlled by that combination. MULTIPLE TYPES OF USER INPUT DEVICES The present home automation system, as has been previously described, by means of its extended bus and ability to interface with a wide variety of devices, can support any user device that can be controlled serially, over a network port, wirelessly, or with a custom interface compatible with the standard bus used by the inventive home automation system. Since both the bus and number of serial or parallel ports can be increased as desired, the number of interface devices and thus the number of devices in the home environment or commercial environment that can be controlled can also be increased as desired. That allows the disclosed home automation system to be configured to meet the exact needs of each home owner or business user. The system can thus be tailored to the layout of the environment to be controlled. For example, a screen display can be provided in the kitchen, a touch switch with nine buttons can be placed near the exterior doors, voice recognition can be installed in the master bedroom and any inside telephone can be used to control the system. It should also be remembered that an important aspect of the present invention is the commonality of use between input devices. For example, both the computer keyboard and the voice recognition system use the screen displays for cues and visual feedback. Wall-mounted telephone touch-tone buttons can be used to control a subset of the screen commands.
In addition, other user interfaces may be added to the home automation system, such as gestures, mice, joy sticks, light pens, virtual keyboards and other cursor control devices. Voice recognition may also be used over the telephone. In order to implement the use of multiple types of user devices in a modular manner which allows for different types of devices in different parts of the home to be used to control the home automation system described herein, the following equipment may be simultaneously connected to the home automation system central processor: (1) a standard PC computer keyboard; (2) a plurality of dry contact switches, which may be connected to the central processor by means of a solid state input module connected to the bus of the central processor; (3) multiple screens connected wirelessly or wired to the central processor by means of the router or bus; (4) a plurality of hand-held remote receivers, which may be connected to the central processor by means of the router or infrared and RF receivers; (5) a telephone system with a plurality of hand sets, or a single line standard telephone system, which may be connected to the central processor through a telephone interface and the speech processor; (6) multiple voice recognition locations, which may be connected to the central processor through microphones or other wireless microphone systems which are in turn connected to respective speech processors running standard voice recognition software which themselves are connected to the central processor; and (7) multiple voice response locations with remote speakers which provide spoken information and instructional cues to the user. All of the foregoing equipment is simultaneously connected to the central processor and may be used to perform a plurality of functions.
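The commonality of use between input devices described above implies that each device's raw input is normalized into a common internal event before reaching the central processor. A minimal sketch of that idea follows; the function names, the screen coordinate mapping and the voice vocabulary are all illustrative assumptions, not the patent's design.

```python
# Each input device type is normalized into the same internal event shape,
# so the central processor can treat a key press, a touch and a spoken
# phrase identically once decoded.

def from_keyboard(key):
    return {"source": "keyboard", "command": key}

def from_touch(x, y):
    # A touch is mapped to whichever on-screen function box contains it
    # (a single hypothetical two-box layout is assumed here).
    box = "security" if y < 100 else "audio_video"
    return {"source": "screen", "command": box}

def from_voice(template_number):
    # A trained vocabulary maps recognized template numbers to commands.
    vocabulary = {1: "security", 2: "audio_video"}
    return {"source": "voice", "command": vocabulary.get(template_number, "unknown")}
```

With this normalization, the same screen displays can serve as cues and feedback regardless of which device produced the command.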
TOUCH, SLATE, TABLET, SMART PHONE OR CURSOR-CONTROLLED GRAPHICS DISPLAY One of the most important functions of a home automation system is the scheduling of actions in the home environment to be performed automatically by the system at a future date and time. The home automation system described herein incorporates dynamic, object-oriented, touch or cursor controlled graphics displays for effectuating these scheduling functions. One way in which the scheduling functions are accomplished is through the use of a dynamic graphics representation of clock faces, either 12 or 24 hour, calendar pages, either weekly, monthly or annually, time lines, or other pictorial representations of day, date and/or time as a data input/output display for entering and reviewing schedules in a home automation system. A screen or other cursor control device is used to move the hands on the clock, select a day on the calendar, move an "on-flag" to a point on a time line, or otherwise select, in a dynamic, graphics manner, a date or time to schedule a home automation function. This type of system can also be utilised with other stand-alone systems such as lighting systems, entertainment systems, security systems, energy management systems and others. Functions that can be scheduled in this manner include turning a light or appliance on or off, watering the lawn, turning on different lighting moods at different times of the day, turning on audio music or a television station, recording of television programs, operating modes or features of the home automation system itself, operating modes of a security system, electric locks, reminders for the system user, operating modes of a telephone system and automatic information retrieval. As will be described further herein, the same types of displays may be used directly to control entertainment systems by working with a graphics representation of the entertainment system and its distributed speakers and sources.
The user controls the system by selecting his audio or video source from a graphics display and then selecting the rooms in which he would like the audio and video played. This particular feature of the inventive home automation system is implemented primarily as a software function of the instructions run by the central processor. It is communicated to and receives instructions from the user by implementation on the high resolution colour graphics display monitor. First, a background screen is displayed on the monitor with the appropriate graphical representation of the scheduling system, for example, a monthly calendar, a start clock and a finish clock. Using a screen or other cursor control device such as a mouse or light pen, the user can dynamically select and/or move graphics elements to indicate a date and time to start and stop any action the home automation system is capable of performing.
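The graphics scheduling screens described above ultimately produce a stored record of an action with a start and stop date and time. A minimal sketch of such a record follows; the function name, the field names and the example action are assumptions for illustration only.

```python
import datetime

def make_schedule_entry(action, start, stop):
    """Validate and store one scheduled action as a plain record,
    as a scheduling screen might do after the user picks start and
    stop times with the clock and calendar graphics."""
    if stop <= start:
        raise ValueError("stop must fall after start")
    return {"action": action, "start": start, "stop": stop}

# Example: schedule the lawn sprinklers for a half-hour morning run.
entry = make_schedule_entry(
    "lawn_sprinklers_on",
    datetime.datetime(2013, 1, 28, 6, 0),
    datetime.datetime(2013, 1, 28, 6, 30),
)
```

The validation step reflects the natural constraint that a finish clock setting must fall after the start clock setting.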
A preferred embodiment of the main menu screen which provides a plurality of different functions which may be called from the main menu. Such functions include control of the audio and video functions of the home automation system, security management functions, lighting moods, information, such as retrieval from remote data bases, environmental control, such as temperatures throughout the home etc., convenience features such as calling for a bath to be drawn and a certain temperature to be maintained and the system configuration functions. When the user places his finger inside the appropriate box, the system determines which function has been selected, highlights that function box by, for example, changing the colour of the box, and then displays the first (and sometimes only) sub-menu associated with that function. Each of the screens also contains two feature boxes in common, namely the Back and Quit function boxes. The back function box, as the name implies, functions to take the user back to the previous screen display. The quit function box, on the other hand, functions to end a session of using the screen and return the system to a dormant mode which may also display a custom introductory graphics screen or simply a blank screen which, when touched, will again cause the system to display the main menu screen. The audio/video screen sub-menu presents the user with the various functions which can be controlled by the present home automation system as disclosed herein. The system can control any video playback device, a television, select certain AM or FM frequencies on a receiver, control a compact disk player, distribute audio throughout the home, be used to operate the volume, tone and other controls associated with the audio and video equipment controlled from the audio/video screen, and turn the audio/video system off. Obviously, additional similar pieces of equipment and functions may likewise be controlled. 
The screen allows the user to select an audio source which may then be distributed, selectively, throughout the establishment through high fidelity speakers placed strategically in different zones throughout the establishment. The selections include the output from the music player, from an amplifier, from a television and also a function to turn the distributed audio off. An audio sub-menu touch-screen, which is displayed after an audio source is selected (or after an intervening floor-selection menu for a large house), allows the user, by touching the appropriate portions of the particular level of the house shown as a floor plan on the touch-screen, to cause the audio output from the selected source to be played through the speakers present in the particular rooms selected by the user. The screen menu also provides two function boxes, marked as upper level and ground level, so that the other levels of the home can be selected to be shown on the screen. In that manner, the desired rooms on all floors of the home may be included to play the output from the audio source chosen by the user. Obviously, other graphics designs are possible. In addition to selecting one of the plurality of levels of the home as a function, which would then display, for example, the entry level sub-menu screen, the user may also select to view the output of video cameras distributed outside or throughout the home, for example, a video camera at the front door or in the garage. The video output from those cameras is displayed on any video output device. A security event log function may also be selected. Also, the air conditioning or heating blowers may be re-enabled by the user after they have been automatically shut down by the automation system as the result of a smoke or fire detection by the security system. The sub-menu screens appear when the entry level function is selected from the security management sub-menu screen.
The screen is dominated by a floor plan schematic of the entry level section of the home. By touching the appropriate secured zone, the user can arm the security system, or disable a particular zone, depending upon the current state of the security system and the selected zone. The system status information indicates that the system is ready to arm. In a block above the system status information, a zone key is provided. A password is used by the system in order to prevent unauthorized control of certain functions, for example, the security function of the instant home automation system. The screen provides the usual back and quit functions and, in addition, provides for scroll up, scroll down and clear log functions. In the centre of the screen, data is shown relating to prior intrusions in each of the zones. The lighting mood sub-menu screen is reached from the main menu screen. Each of the available functions, namely early morning, normal day, normal evening, day party, evening party, romantic evening and night lights, sets predetermined lights within the home to predetermined settings which are stored as data files on the hard disk of the home automation system and are called up merely by the user touching the appropriate function box on the lighting mood sub-menu screen. The information sub-menu screen may be accessed directly from the main menu screen. The available functions cause the telephone system in the home to dial a telephone number stored in a data file which accesses a remote data base. Once the data base is reached, the information, for example, in the case of weather information, is disseminated to the user either through a spoken report by means of the high fidelity speakers placed adjacent to the screen currently in use, or through a high resolution graphics display. Thus, the data can be presented in either audio or visual format.
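The lighting moods described above are stored as data files and recalled by a single touch. A sketch of that storage scheme follows, using JSON in a temporary file purely for illustration; the file name, format, mood names and circuit names are all assumptions.

```python
import json
import os
import tempfile

# Hypothetical mood presets: each mood maps lighting circuits to levels.
moods = {
    "romantic evening": {"dining": 25, "living": 15, "hall": 5},
    "normal day": {"dining": 100, "living": 100, "hall": 80},
}

# Persist the presets as a data file, as the text describes for the hard disk.
path = os.path.join(tempfile.gettempdir(), "lighting_moods.json")
with open(path, "w") as f:
    json.dump(moods, f)

def recall_mood(name):
    """Load the stored presets and return the settings for one mood,
    as touching a mood function box would."""
    with open(path) as f:
        return json.load(f)[name]
```

Because the moods live in a data file rather than in code, new moods can be defined or edited without changing the system software.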
The software to schedule a device using a graphics based system as described above involves using the graphics capabilities of the display generator in a way to produce pleasing picture elements that can be moved under cursor/touch/gesture control. The result will be both visually pleasing and easy to understand and use. Obviously, the software can be implemented in a variety of ways, depending upon the specific screen design. The user selects the scheduling screen options subroutine and then selects the option desired. The scheduling screen options subroutine is accessed using the system configuration function from the main menu. The program then determines whether the option selected is the start date and, if it is, displays the graphics for the start date and accepts that input. Once the user makes his selection for the start date, the program returns to A, displays the scheduling screen and options and the user then picks the next option to be scheduled. In that manner, the user is led through a sequence of steps which result in the display, input and acceptance of the scheduling data for the selected device. If the option selected is not the start date, then the program determines whether the option selected is the start time. If it is, then the graphics for the start time and acceptance of the start input are displayed and the user is able to enter the start time. The program then returns to A, displays the scheduling screen and options and waits for the user to make another selection. If the option selected is not the start date or start time, the program then determines whether the option is the stop or finish date. If it is, then the graphics for the stop date and acceptance of the stop date input are displayed. Again, the user selects the stop date, requests that the stop date input be accepted and the program returns to A and displays the scheduling screen and options.
If the option selected is not the start date, start time or stop date, then the program queries as to whether the option selected is the stop time. If it is, then the display graphics for the stop time and acceptance of the stop time input are shown on the video display monitor. The user selects, using the screen, the stop time and the program returns to A to display the scheduling screen and options. In the event that the option chosen by the user is not a start or stop option, but it is an option which indicates that the scheduling has been completed, the program stores the schedule and returns to the main program.
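The option-decoding sequence described above can be sketched as a simple dispatch loop. The function and option names below are illustrative assumptions; the patent describes the flow in terms of flowchart branches rather than code.

```python
def run_scheduling_screen(selections):
    """Process a sequence of (option, value) picks as the text describes:
    each start/stop option routes to its own display-and-accept step, and
    a completion option stores the finished schedule."""
    schedule = {}
    for option, value in selections:
        if option in ("start_date", "start_time", "stop_date", "stop_time"):
            schedule[option] = value   # display graphics, accept the input,
                                       # then return to A (top of the loop)
        elif option == "done":
            return schedule            # store schedule, return to main program
    return schedule

# Example session: the user picks each option in turn, then completes.
result = run_scheduling_screen([
    ("start_date", "2013-01-28"),
    ("start_time", "18:00"),
    ("stop_date", "2013-01-28"),
    ("stop_time", "23:00"),
    ("done", None),
])
```

Returning to the top of the loop after each accepted input mirrors the "returns to A" step in the text.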
As previously described, this entire sequence can be accompanied by verbal prompts, generated by the voice recognition and speech synthesis circuitry, to prompt the user through the scheduling scheme. Many different types of events can be scheduled in this manner, such as a lighting system event, control of the audio/video entertainment system, the energy management system, individual appliances and the configuration of the system itself. Thus, system features and functions can also be scheduled by the user, such as spoken alerts, passcodes, etc. A monthly calendar can be shown which also provides function boxes which, upon being touched, display either the month before or the month following the currently displayed month. The user selects a day of the month merely by touching that date on the screen. Other function boxes provide for a weekly cycle selection, copying the time selected, cutting and saving a schedule for use with another function or cancelling the date. INFORMATION RETRIEVAL The present home automation system provides an integration of tailored information and communication services in a home automation system environment with high resolution colour graphics displays. Since the system disclosed herein can be connected to the outside world by both telephone and modem, it provides the capacity for remote communications and information retrieval. There are a plurality of ways in which the home automation system disclosed herein can be connected to sources of external information. Three of these are specifically shown as the telephone system, which is connected to the standard telephone network, the Ethernet network interface and the multiple external terminals, and a network file server which together form a local area network, and the modem which may be connected to a plurality of remote data bases. Each of those modules is ultimately connected by means of the data bus to the central processor.
Utilizing, for example, the proper telephone interface and a speech processor, the present home automation system can make and answer audio telephone calls. This means that the home automation system can be directed from a screen to make telephone calls to information services which provide audio information. One such application is to retrieve single message recordings such as daily weather reports, financial reports or sports reports. An additional application is to access voice "databases" such as individual stock prices or checking account balances, which requires that the home automation system send touch tone signals to retrieve the proper information. Alternatively, the information can be requested and retrieved in a digital format from a wide variety of available data bases utilising the modem. The same is true when using a connection to a local area network that could provide local community information or, through a gateway, access to remote information. The present invention, by its combination of information retrieval with high resolution colour graphics, provides advantages both in requesting and in displaying information. For example, when requesting information, the displays can provide a well organised menu of options so that, with a single touch, the desired information can be retrieved. For displaying information, the colour graphics capability allows the received information to be reformatted into highly readable displays or even into charts and graphs. In the future, as the technology used by on-line information services improves and additional services become available, pictorial or other information will be able to be retrieved and displayed by the home automation system, as well.
The foregoing types of information can also be requested with any wired or wireless cursor control or general input device, such as a mouse, screen, light pen, joy stick, keyboard, gesture or voice recognition system. MULTIDIMENSIONAL INTERACTION AND CONTROL The present home automation system provides the capability to the user to interact with and control a home in a variety of modes in order to increase both the capability and ease of use of the home automation system. For example, the present home automation system incorporates high resolution colour graphics displays that are partially menu oriented, partially floor plan oriented, and partially mood oriented. Each type of display has advantages in different control situations. Floor plans are best used when direct control options are required for many specific devices. The present home automation system utilizes floor plans to activate and deactivate individual security zones throughout the house by touching the location of that zone on a floor plan, using a screen. The present invention also, however, provides for "mood" control. Moods are macro instructions of multiple commands that allow single commands to set an entire living environment in the home, such as all the lighting and music for an evening party. On the other hand, menus offer the best means to select options that are not necessarily related to a location in the home, such as a command to play a particular compact disk, request a certain type of remote information, or to switch to another sub-menu of options. The system is provided with the ability to use the floor plan mode to create the moods to be used in the menu of moods. Thus, the floor plan displays create and define lists of tasks to be performed in each particular mood. The present home automation system allows the use of a wide variety of video options.
For example, by use of certain display driver cards, a range of video resolutions from high resolution displays for use with detailed floor plans to lower resolution menus can be shown. That provides for faster loading of the information into the central processor and out to the monitor. Also, by using a monitor that can show both standard video as well as the high resolution display graphics, the display screens can double as televisions or security monitors. This function is accomplished by the use of a multi-resolution video display driver. INTERFACE TO A MULTI-ZONE SECURITY SYSTEM Another capability of the present home automation system is its ability to interface to multi-zone security systems and to allow user control of such security systems by means of interactive screens and other alternative control interfaces. The present home automation system is designed to interface to and take advantage of the advanced features offered by sophisticated security systems. Thus, the home automation system is designed to operate with a built in or external security system, which provides several advantages. First, the present system can be performing control tasks while the security system is calling the alarm service. One of those control tasks can be contacting someone else by telephone as a backup to the alarm service. Another advantage of being designed to operate with an external security system is that the present home automation system can monitor the performance of the security system and can keep a separate log of all security events. Still another advantage is in the use and power of the interaction by the user with the security system by means of a high resolution colour graphics display and cursor control devices. Such an interaction allows the user interface to be greatly improved and simplified over those previously known.
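The separate security event log and the relaying of user commands to the external security system, both described above, might be sketched as follows. All names, the command set and the bus message format are illustrative assumptions, not the patent's protocol.

```python
# The home automation system keeps its own real-time log of security
# events while relaying user commands to the external security system.
security_log = []

def log_security_event(timestamp, zone, event):
    """Record one security event in the system's separate log."""
    security_log.append({"time": timestamp, "zone": zone, "event": event})

def relay_command(command, zone):
    """Relay a validated user command toward the security system bus
    (the message text is a stand-in for a real bus transaction)."""
    valid = {"arm", "disarm", "bypass"}
    if command not in valid:
        raise ValueError(f"unknown security command: {command}")
    return f"security_bus<-{command}:{zone}"
```

Keeping the log inside the home automation system, rather than relying on the security system's own memory, is what lets the system monitor the security system's performance independently.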
Further, the home automation system, since it is designed to operate with external security systems, can be utilised with the very latest security systems and can take advantage of improvements and updates to such security systems. Another advantage of integrating a security system into the present home automation system is the ability to provide information to the user while announcing the alarm. The system can speak the location of the intruder or fire while displaying that location on a floor plan screen of the home. The system also turns on all lights in the house to light exit routes and shuts down heating system blowers to minimize smoke circulation in the house. In addition, the home automation system is connected to the security system bus, listens to and interprets all data instructions generated by the security system on its bus, and translates commands from the home automation system into the security system's data protocol so that the security system will receive the commands and respond appropriately. Besides allowing the security system to receive and respond to commands from the central processor, which are ultimately generated by the user of the system, the home automation system is also able to record security events on a real time basis. SPOKEN MESSAGES AS CUES The present home automation system also provides for the use of spoken messages as cues for screen use, providing a great level of ease of use for non-technical users of the system. The present home automation system provides spoken cues to help guide the user when using screens to the next step. The present home automation system incorporates hardware and software to support the playback of high-fidelity, digitally recorded messages to the user. For example, when the user touches one of the screens to request control of a type of function, a spoken phrase is played through one of the remote speakers as each new screen graphic is displayed.
For example, on the main menu, the user may select the "Security Management" option. If the user lived in a large house, his next step might be to select a floor or area of the house he wishes to control. By having the system speak the phrase "please select your floor" upon displaying a floor selection menu, the user is continuously assisted in understanding the next function to be performed. The foregoing function of the system is performed in software by calling a speech subroutine as each new graphics screen is displayed. The speech information is stored in the extended RAM memory of the processor, which has been previously described. The home automation system utilizes the speech processor and a remote speaker located near each screen location in order to provide the spoken cues to the user while he is using the screen. THE SOFTWARE SYSTEM As has been described herein, the present expandable home automation system is a distributed processor system, in which a central processor is integrated with a collection of secondary processors associated with the various interface elements. The disclosed home automation system is preferably arranged with a star topology, in which the secondary processors serve to relay information to the central processor, or to translate commands received from the central processor into commands that their dedicated devices can understand. As has been described, the secondary processors manage the following subsystems: voice recognition and voice synthesis; telephone communication; screen communication; hand-held remote control unit communication; input and output control and monitoring; security and fire system safety and monitoring; and other optional intelligent subsystems, such as lighting, audio and video, number plate recognition and HVAC.
While the secondary processors generally run standard software available from the interface manufacturer, the instructions for running the central processor have not been described and will thus be described in this section of the application. The instructions have been created in a standardized and modular form such that the instructions may be configured for each different home environment, as the user desires. The instructions consist of six major portions, which are described hereafter. Those portions are (1) the Subsystem Interrupt Handler; (2) the Internal Scheduled Interrupt Server (ISIS); (3) the Polling Loop; (4) the Tasks; (5) the Device Drivers; and (6) the Support Environment. Briefly, the Subsystem Interrupt Handler operates in the background in order to receive messages from the secondary processors and to build received message queues. The Internal Scheduled Interrupt Server handles the scheduling of timed events. The Polling Loop monitors the message queues and the ISIS and calls the appropriate Task. The Tasks respond to the messages from the secondary processors. The Device Drivers translate internal commands into the instructions necessary to execute the desired function. The Support Environment consists of the additional task switching apparatus and internal tools that allow the various subsystems to function in a cohesive and multitasking manner. THE SUBSYSTEM INTERRUPT HANDLER All of the tasks and devices based on a serial line, such as the screen, handheld remote control unit and security system, are supported by the interrupt handler, running in the background. When a byte of data is received at any port, an interrupt request line is raised, which causes the interrupt handler to execute. The interrupt handler determines which serial port caused the interrupt, copies the new data into that port's queue and then increments that queue's pointer. When the interrupt request line is raised the interrupt handler begins to execute.
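The per-port queue management performed by the Subsystem Interrupt Handler can be sketched as a set of ring buffers. The queue size, port names and function name below are assumptions for illustration; the patent describes the behaviour, not this code.

```python
# One fixed-size ring buffer per serial port: the handler copies each
# received byte into the port's queue, increments that queue's pointer,
# and wraps the pointer around when it passes the queue size.
QUEUE_SIZE = 8
queues = {port: [None] * QUEUE_SIZE for port in ("screen", "security")}
pointers = {port: 0 for port in queues}

def interrupt_handler(port, byte):
    """Service one received byte for the port that raised the interrupt."""
    q = queues[port]
    p = pointers[port]
    q[p] = byte
    p += 1
    if p >= QUEUE_SIZE:   # wrap the queue pointer around, as in the text
        p = 0
    pointers[port] = p
```

Because the handler only copies a byte and advances a pointer, it returns quickly to the interrupted program, leaving the interpretation of the data to the Polling Loop and Tasks.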
Initially, the interrupt handler jumps to the interrupt handler routine. The interrupt handler routine then determines, by asking each connection bus, which connection generated the interrupt which caused the interrupt request line to be raised. Upon determining which line generated the interrupt, the appropriate data byte is pulled from the identified serial line and is placed in the appropriate queue. That queue's pointer is then incremented. If the queue pointer is greater than the queue size, then the queue pointer is wrapped around. The interrupt handler routine then ends and returns to the normal program. THE INTERNAL SCHEDULED INTERRUPT SERVER (ISIS) The Internal Scheduled Interrupt Server or ISIS is a tool available to any task, device driver or ISIS event. It allows a routine to schedule a software event to occur any time within the next 24 hours. The ISIS load event routine is called with an event time, event number and optional data. The new event is inserted into the ISIS queue in accordance with its event time. The top of the ISIS queue is checked in the Polling Loop, and, if the current system time is equal to or greater than the time of the top event in the queue, that ISIS event is removed from the queue and then executed. The types of ISIS events available are monitoring, scheduled subsystem events, wait for acknowledgment, wait for data, and support environment events. As previously described, when this tool is called from the Polling Loop, the instruction at the top of the queue whose time has been passed is to be executed. THE POLLING LOOP The Polling Loop forms a part of the virtual multitasking environment utilised by the instant home automation system, since all tasks are executed incrementally depending upon inputs noticed by the Polling Loop. The Polling Loop processes and checks the various inputs in priority order from high to low.
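The ISIS event queue described above, where events are inserted in time order and executed once the system time reaches them, can be sketched with a heap. Using `heapq` is an assumed implementation detail, as are the function names; the patent only specifies the behaviour.

```python
import heapq

# Time-ordered event queue: each entry is (event_time, event_number, data).
isis_queue = []

def isis_load_event(event_time, event_number, data=None):
    """Insert a new event into the queue in accordance with its event time."""
    heapq.heappush(isis_queue, (event_time, event_number, data))

def isis_check(current_time):
    """Remove and return every event whose time is equal to or less than
    the current system time, as the Polling Loop check would."""
    due = []
    while isis_queue and isis_queue[0][0] <= current_time:
        due.append(heapq.heappop(isis_queue))
    return due
```

Because only the top of the queue is ever inspected, the Polling Loop's per-pass cost stays constant no matter how many future events are scheduled.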
Whenever an input presents data to be processed, the Polling Loop executes an immediate jump to that input's task. When the task has completed executing its current state using the new data, it immediately jumps back to the top of the Polling Loop. However, some tasks may immediately execute their next state without returning to the Polling Loop if more data is available. In its preferred embodiment, the Polling Loop monitors the following input sources in order of priority: (1) fire/security; (2) fire/security (optional); (3) the Internal Scheduled Interrupt Server; (4) the I/O controller; (5) the keyboard; (6) the voice recognition board; (7) the voice recognition board (optional); (8) the master screen; (9) the secondary screen and other screens; (10) the light controller; (11) the telephone control interface; (12) the appliance controller; (13) the motor controller; and (14) other intelligent subsystem interfaces. From the top of the Polling Loop, a determination is made as to whether new security or fire data has been received on the serial line queue which corresponds to one line from the serial interface. If new data has been received, the data is interpreted and, if found to be valid data, is sent for execution to a task state decoder which selects which section of code or state in the Task will process the new data. The program then returns to the top of the Polling Loop. If the new data is determined to be invalid data at the data interpreter step, then the program jumps immediately back to the top of the Polling Loop. All other tasks follow the same format of "data interpreter" and "state decoder." Further discussion of specific Tasks follows.
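The priority-ordered dispatch performed by the Polling Loop can be compressed into a short sketch. The queue names, the reduced priority list and the task-callback scheme are assumptions for illustration.

```python
# A reduced set of input queues, checked in priority order from high to low.
input_queues = {"fire_security": [], "isis": [], "keyboard": [], "screen": []}
PRIORITY = ["fire_security", "isis", "keyboard", "screen"]

def poll_once(tasks):
    """One pass of the loop: jump to the task of the highest-priority
    input source with pending data, then return to the top of the loop."""
    for source in PRIORITY:
        if input_queues[source]:
            data = input_queues[source].pop(0)
            tasks[source](data)   # immediate jump to that input's task
            return source         # the task then returns to the top
    return None                   # no input presented data this pass
```

Checking fire/security first each pass is what guarantees that safety-critical data is never delayed behind lower-priority screen or keyboard traffic.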
If no new security and fire data has been input, then a determination is made as to whether there is new data from the second (optional) security and fire interface. If there is, then Task 2 is executed based upon that data and then the program returns to the top of the polling loop. If no new data is in the serial queue at the step for determining whether new security/fire data has been input, then the program next determines whether the current system time is greater than the time of the next event in the Internal Scheduled Interrupt Server event queue. If it is, the Internal Scheduled Interrupt Server subroutine is called. After the execution of the ISIS subroutine, the program returns to the top of the Polling Loop. A determination is then made as to whether a key has been pressed. If the determination of whether a key has been pressed is negative, then a determination is made as to whether data has been received from the screen. If it has, the program jumps to the touch-screen or master screen subroutine. The program then returns to the top of the Polling Loop. In the second half of the Polling Loop, a determination is then made as to whether there is new screen data by examining the serial queue connected to the touch-screen. If there is data present, then the program executes the Task for the screen and jumps to the screen subroutine, which is similar to the touch-screen subroutine. Note that additional secondary screen tasks may be inserted in the Polling Loop at this point. If no new screen data is present at the serial queue, a determination is then made as to whether a particular function switch, for example, a serial wall switch, has inputted data on its serial line to the serial queue. If new function switch 1 data is present, the task associated with that particular function switch is executed and the program then returns to the top of the Polling Loop. If no new function switch data is present, then a determination is made as to whether new function switch data is present on another serial line in its serial queue.
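The ISIS time check performed within the Polling Loop, as described above, may be sketched as follows. This illustrative Python sketch keeps the event queue ordered by event time; the class and method names are assumptions:

```python
import heapq

class ISIS:
    """Sketch of the Internal Scheduled Interrupt Server queue.
    Events are kept in order of event time; the Polling Loop pops and
    executes any event whose time has been reached."""

    def __init__(self):
        self.queue = []  # heap of (event_time, event_number, data)

    def load_event(self, event_time, event_number, data=None):
        # Insert the new event in accordance with its event time.
        heapq.heappush(self.queue, (event_time, event_number, data))

    def poll(self, now):
        # Called from the Polling Loop: if the current system time is
        # equal to or greater than the time of the top event, remove it
        # from the queue and hand it back for execution.
        fired = []
        while self.queue and self.queue[0][0] <= now:
            fired.append(heapq.heappop(self.queue))
        return fired
```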
If new data is present, the task which corresponds to that function switch is executed and the program returns to the top of the Polling Loop. Note that additional function switch tasks may be inserted in the Polling Loop at this point. If the current state is determined to not be the neutral state, then the system checks voice recognition to determine whether a template number has been returned from the currently active voice recognition board. If it is determined that a template number has been returned, then the recognition score is displayed and the simulated touch coordinates are set to the middle of the screen. The system then jumps to the master screen immediate response module. In the event that no template number has been returned, or if the voice recognition is determined to not have been enabled, the system moves on to the screen task.

THE TASKS

A Task, as that phrase is used herein, is a software module that supports one specific piece of hardware within the disclosed home automation system. The communication protocols of both the tasks and the secondary processors are designed in such a manner as to allow the Tasks to run in a high-level simulation of a multitasking system. Each task is broken up into sub-tasks, which are in turn themselves divided into single "states." Each state is designed to execute in one second or less and, on completion, returns to the top of the Polling Loop. Transitions between states are triggered by the interrupting events recognised by the Polling Loop.
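The Task structure described above, in which a state decoder routes incoming data to the currently active state, may be sketched as follows. This is an illustrative Python sketch; the handler-table representation and all names are assumptions made for the sketch:

```python
class Task:
    """Sketch of a Task as a state machine. Each state is a short
    handler; the state decoder routes data to the current state, which
    may name the next state, then control returns to the Polling Loop."""

    def __init__(self):
        self.state = "neutral"   # the catchall Neutral State
        self.handlers = {}       # state name -> handler(data) -> next state

    def register(self, name, handler):
        self.handlers[name] = handler

    def dispatch(self, data):
        # The state decoder selects which section of code (state)
        # will process the new data.
        handler = self.handlers[self.state]
        next_state = handler(data)
        if next_state is not None:
            self.state = next_state
```

For example, a Neutral State handler might enter a "lighting" Contextual State when a lighting menu selection arrives, and ignore garbled input otherwise.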
The various devices controlled and monitored by the home automation system described herein are assigned to tasks in the following manner. Every secondary processor which communicates with the central processor is assigned its own task, as are devices on the internal bus which use an Interrupt Request line. The assignment of tasks to actual hardware occasionally demands that more than one device be tied to one task. Furthermore, the task number assigned describes that task's priority. That is, Task 1 is the highest priority task in the system. The task assignments for the preferred configuration are set forth below.

Task 1: Fire and security interface
Task 2: Second fire and security interface (optional)
Task 3: Input and output controller interface
Task 4: Master screen and voice recognition boards
Task 5: Secondary screen (optional)
Task 6: Serial wall switch 1 bus
Task 7: Serial wall switch 2 bus (optional)
Task 8: Telephone control interface
Task 9: Other intelligent subsystem interfaces

All tasks are structured in approximately the same manner and consist of the following pieces: (1) data validation routine; (2) immediate response routines; (3) neutral state; and (4) contextual states. Each of those components is described further below. When a Task receives data, it must first evaluate it. The data validation routine (DVR) consists of a sequence of states, in which each state evaluates one byte of data. If the byte is rejected, the state resets the command. If the byte is not rejected, the state increments the state variable to the next command state and returns to the Polling Loop, thus allowing the next byte of data to cause the next command state to execute. The communication protocol for that task's device is encoded into the command states, in order to form the criteria by which the central processor will accept or reject data. The central processor also simultaneously decodes the data to provide the necessary information to the remainder of the task.
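The byte-by-byte data validation routine described above may be sketched as follows. This illustrative Python sketch assumes a minimal two-byte protocol (a header byte followed by one payload byte); the actual per-device protocols are not given in the text:

```python
def make_dvr(expected_header):
    """Sketch of a data validation routine: one V-state per byte.
    A rejected byte resets the command; an accepted byte advances the
    state variable so the next byte drives the next command state."""
    state = {"v": 0}

    def feed(byte):
        if state["v"] == 0:
            if byte != expected_header:
                return None          # byte rejected: command reset
            state["v"] = 1           # byte accepted: advance state variable
            return None              # return to the Polling Loop
        # Second byte completes the (assumed) two-byte command.
        state["v"] = 0
        return (expected_header, byte)

    return feed
```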
Immediate response routines-when a valid complete transmission has been received, most Tasks will perform some immediate action, such as acknowledging the transmission or displaying a cursor on the touch-screen. These immediate responses are performed regardless of the current state of the Task. Once the responses have been performed, the software will immediately jump to either the Neutral State or to the appropriate Contextual State.
Neutral State-The Neutral State is a catchall state which is executed when the Task has not been placed in any other (contextual) state. Most unpredictable and unexpected events, such as a fire alarm, are processed by a Neutral State. The Neutral State also serves to ignore garbled or meaningless transmissions. Contextual States-Contextual States form the majority of the intelligence of the instant home automation system. When the data reaches the current Contextual State, it is acted upon in the context of that current state. For example, data indicating that the fourth touch box of the screen menu has been touched when the screen task is in the "lighting" mood Contextual State would be evaluated on that basis, resulting in some action, such as calling the lighting mood device driver to set the selected lighting mood number. Some Contextual States may jump to the Neutral State if the data does not make sense to the current Contextual State. However, when that occurs, the Task's state variable will not change, and will still direct future data to the same Contextual State. From the top of the Polling Loop, when the determination of whether there is new screen data is in the affirmative, a determination is then made as to whether that data is valid. If the data is not valid, then the task is terminated and the program returns to the top of the Polling Loop. If the data is valid, a determination is then made as to whether sufficient data has been received to calculate the touchpoint. If insufficient data has been received, then the state variables are set such that further data will be used to complete the calculation, the task is terminated, and control returns to the top of the Polling Loop.
If, on the other hand, sufficient data to calculate the touchpoint has been received, then a determination is made of whether the touchpoint is greater than 7 pixels from a prior touch and, if so, a new cursor is displayed. A determination is then made as to the state of Task 4, which causes the program to jump to a sub-task of Task 4 corresponding to the function selected by the user. For example, the program can jump to the dormant screen, to the main menu, to the lighting mood menu, to the audio/video menu, or to other menus to control other functions described in this application. A different subsystem can be controlled from each of those menus. After performing the Checktouch subroutine, the main menu subroutine determines whether more touch-screen data is pending. If there is more screen data pending, then the program returns to the top of the Polling Loop. If there is no more screen data pending, then the subroutine removes the return to dormant screen ISIS event, also described later as a "Screen Timeout" ISIS event, and then, according to the last box that had been touched on the screen, goes to the appropriate subroutine. For example, the program can return to the dormant screen. If the last box touched on the screen is the lighting mood menu, then the program will jump to the subroutine which loads the lighting mood menu and sets the state of the Task for the lighting mood. The program then returns to the top of the Polling Loop. If another box from the main menu screen has been selected, such as the audio and video menu, the program jumps to the subroutine which loads the audio/video menu data and then sets the state of Task 4 to the audio/video menu. The program then returns to the top of the Polling Loop. If more screen data is pending, the program returns to the top of the Polling Loop. If no more screen data is pending then, depending upon the box touched on the screen, the program jumps to one of a plurality of subroutines.
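The 7-pixel test described above, under which a new cursor is drawn only when the touchpoint has moved more than 7 pixels from the prior touch, may be sketched as follows. The text does not state which distance measure is used; this illustrative sketch assumes a per-axis (Chebyshev) comparison:

```python
def needs_new_cursor(prior, touch, threshold=7):
    """Return True when the new touchpoint is more than `threshold`
    pixels from the prior touch, so a new cursor should be drawn.
    The per-axis distance measure is an assumption of this sketch."""
    dx = abs(touch[0] - prior[0])
    dy = abs(touch[1] - prior[1])
    return max(dx, dy) > threshold
```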
For example, the program can return to the main menu. It can return to the dormant screen, or it can set the lighting mood selected by calling the appropriate Device Driver, which then blanks the lit box, and the program then returns to the top of the Polling Loop. The following describes the functions which are performed when the program returns to the main menu. After the program branches back to the main menu, the main menu subroutine removes the prior "return to dormant" interrupt server event, sets a new "return to dormant screen" ISIS event and then loads the main menu graphics and data and sets the state of Task 4 to the main menu. The program then returns to the top of the Polling Loop. In the event that the return to dormant screen subroutine is called from, for example, the lighting moods menu, the dormant screen subroutine is implemented. Upon returning to the dormant screen, the subroutine removes the prior "return to dormant" interrupt server event and then loads the dormant screen graphics and data and sets the current state to the dormant screen. The subroutine then returns to the top of the Polling Loop. The Checktouch subroutine functions to compare the touchpoint of the user with the touch boxes of the menu shown on the screen. From an active menu, a determination is made as to whether the touchpoint of the user is within any of the menu's touch boxes. If the touchpoint on the screen is within any of the boxes on the menu, then the box touched is lit up. If the touchpoint of the user on the screen is not within any of the menu's touch boxes, then, if a box was previously lit, it is blanked. After the appropriate box has been lit up, if a previous box had been lit, then that box is blanked. In either event, after a determination is made as to whether a box was previously lit and, if so, it has been blanked, the program returns to the active menu from which it jumped to the Checktouch subroutine. If new master screen data is present, then the Master Screen Task is executed.
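The hit-testing core of the Checktouch subroutine described above may be sketched as follows. This illustrative Python sketch represents each touch box as a rectangle; the box representation and names are assumptions, and the lighting/blanking display calls are left to the caller:

```python
def checktouch(touchpoint, boxes):
    """Sketch of Checktouch hit-testing: compare the user's touchpoint
    with the menu's touch boxes.

    `boxes` maps box name -> (x0, y0, x1, y1). Returns the name of the
    box containing the touchpoint (to be lit by the caller), or None
    (in which case any previously lit box is blanked by the caller)."""
    x, y = touchpoint
    for name, (x0, y0, x1, y1) in boxes.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None
```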
Initially, a determination is made as to whether a new byte is present in the master screen queue. If a new byte is present in the master screen queue, then the program determines which of the V-states the Master Touch-screen Task or subroutine is currently in and then jumps to the entry point of the master screen subroutine for the particular V-state condition. A V-state, for purposes of this application, is a state within a data validation routine of a Task or subroutine. V-state is also the name of a variable used by the state decoders of the present system to select which state within a Task or subroutine is the current active state. Once the appropriate V-state is selected, the program goes to that validation state. Assuming the current V-state is V-state 0, the program then determines whether the bits of the first byte in the master screen queue indicate that the new byte is the first byte of a four-byte group describing the touchpoint. If the bits are synchronised, then the system calculates the upper half of the screen X coordinate, increments the current V-state and then makes a determination as to whether more master screen data is available. If no more master screen data is available, then the program returns to the top of the Polling Loop. If a determination is made that the bits of the first byte are not synchronised, then the reset synchronisation routine is executed. If the determination is made that the current V-state is V-state 1, or if more data from the master screen is available, then the program jumps to determine whether the bits in the second byte from the master screen queue are synchronised. If they are, then the system calculates the lower half of the screen X coordinate and then the entire screen X coordinate. The program then increments the current V-state and makes a determination as to whether more master screen data is available. If no master screen data is available, then the program returns to the top of the Polling Loop.
As described above in connection with V-state 0, if the bits of the second byte are not synchronised, then the program executes the reset synchronisation routine. In the same manner as described above in connection with V-states 0 and 1, if the current V-state is V-state 2 or if more master screen data is available, a determination is made as to whether the bits in the third byte are synchronised. If they are, then the upper half of the screen Y coordinate is calculated, the current V-state is incremented and then a determination is made as to whether more master screen data is available. If there is no more master screen data available, then the program returns to the top of the Polling Loop. If, however, there is more master screen data available, or if the current V-state is V-state 3, then the program determines whether the bits in the fourth byte are synchronised. If they are, then the system calculates the lower half of the screen Y coordinate and then the screen Y coordinate itself. The system then resets the V-state to the initial V-state 0 and then the master touch-screen executes an immediate response. The system operates in the same manner with respect to a determination as to whether the bits in the third and fourth bytes are synchronised as described above in connection with the first and second bytes. If a determination is negative, that is, that the bits within the bytes are not synchronised, then the reset synchronisation routine is executed. After executing the master touch-screen immediate response, the system converts the screen coordinates calculated in the master touch-screen validation routine to pixel coordinates. A determination is then made as to whether the particular touch of the screen by the user being analysed is greater than 7 pixels away from the last touch of the screen. If an affirmative determination is made, then the prior cursor is erased, a new cursor is drawn and the new touch coordinates are stored.
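The assembly of the X and Y coordinates from the four-byte group, upper half before lower half, may be sketched as follows. The text does not give the byte format; this illustrative sketch assumes a synchronisation bit in the top position of the first byte and a 7-bit payload per byte:

```python
SYNC_MASK = 0x80  # assumed position of the synchronisation bit

def decode_touchpoint(group):
    """Sketch of the four-byte touchpoint decode: bytes carry the upper
    half of X, lower half of X, upper half of Y, lower half of Y.
    The 7-bit payload and sync-bit layout are assumptions."""
    b0, b1, b2, b3 = group
    if not (b0 & SYNC_MASK):
        # Corresponds to the reset synchronisation routine in the text.
        raise ValueError("bits not synchronised")
    x = ((b0 & 0x7F) << 7) | (b1 & 0x7F)   # upper half, then lower half
    y = ((b2 & 0x7F) << 7) | (b3 & 0x7F)
    return x, y
```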
As previously described, the cursor may be shown on the screen as a highlighting or an inverse video function of the selected menu item on the screen. If the new touch is less than 7 pixels from the last touch, the system sets a touch flag and then jumps to the Master Screen Task state decoder. From the Master Screen Task state decoder the system pauses for a predetermined amount of time, for example, 0.1 seconds. A determination is then made as to whether more screen task data is pending. If there is more screen task data pending, then the system jumps to the top of the Polling Loop. If there is no Master Screen Task data currently pending, then the system erases the cursor and then turns on the speaker in the house closest to the active screen. The system then makes a determination as to whether there is an active phone call. If there is, then the message "I'm sorry, but the system is currently responding to a telephone call" is caused to be sent out over the speaker and the program then returns to the dormant screen. If there is no active phone call, then the system determines whether the password protection function has been activated. If it has, then the password sub-menu is called which requests that the user input the password. If the inputted password is valid, or if password protection is not on, then the stored greeting selected by the user is sent out over the speaker and the main menu state is initialised. If a determination is made that the password inputted by the user is not valid, then the program returns to the dormant screen.
If the main menu state is initialised, the master screen subroutine jumps to the initialisation entry point of the general Contextual State of the master screen task or subroutine. After the initialisation entry point, the system sends a "prompt" phrase over the speaker, such as "enter your selection", or any other phrase preselected by the user and stored as a data file on the system hard drive. The general Contextual State master screen subroutine then loads the main menu screen and indicates the status of the menu options. The speech recognition vocabulary, if applicable, is then loaded and the system then sets the state to the current Contextual State entry point and returns to the top of the Polling Loop. The entry point into the general Contextual State master screen subroutine is then reached from the Polling Loop. The system then calls the Checktouch subroutine and then calls the display cursor subroutine. A determination is then made as to whether one of the boxes on the main menu has been touched. If none of the boxes have been touched, then the program returns to the top of the Polling Loop. Boxes are defined as areas on the screen which are marked off in some way to indicate that the user should touch there to execute an action. For instance, the current maximum number of boxes a standard menu may have is 9: Back, Quit, and up to seven other choices. If one of the boxes has been touched, the program determines which of the boxes has been selected and jumps to the appropriate point. If the Quit box has been touched, then the system returns to the dormant screen. If the Back box has been touched, a determination is made as to whether the system is to go back to the main menu. If the answer is affirmative, then the system returns to the main menu.
If the back function selected by the user does not refer to the main menu, that is, if there are prior sub-menus between the main menu and the current sub-menu, then the system removes the "screen time out" event from the ISIS, inserts a new "screen time out" ISIS event and sets the touch flag to 0. The prior sub-menu or state is then initialised. If the box selected, instead of being a function box such as the Quit or Back function, is a menu selection box, then the program executes the selected action and determines whether the action selected requires branching to a sub-menu. If a new sub-menu is required, then the system removes the current "screen time out" event from the ISIS and inserts a new "screen time out" ISIS event. That causes the next screen shown on the graphics display monitor to remain there until the current "screen time out" event is removed from the ISIS. The sub-menu selected by the user is then initialised. If a negative determination is made, that is, the program is not going to display a new sub-menu, the current highlighted box is blanked, thus removing the highlighting of the selected box on the screen. The active screen serial queue is then cleaned out and the system returns to the top of the Polling Loop. The display cursor subroutine operates as follows. After being called from a Contextual State of the master screen subroutine, the display cursor subroutine starts and then makes a determination as to whether more screen data is available. If there is more screen data available, then the system returns to the top of the Polling Loop. If no more master screen data is available, then the system pauses for a predetermined time, such as 0.25 seconds. A determination is then made as to whether more screen data is available. If there is no more master screen data available, then the cursor is erased and a determination is made as to whether a box on the menu has been touched.
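The remove-and-reinsert pattern for the "screen time out" ISIS event described above may be sketched as follows. This illustrative Python sketch represents the ISIS queue as a sorted list of (time, name) pairs; the time-out length and the event name are assumptions:

```python
SCREEN_TIMEOUT = 60  # seconds; illustrative, not stated in the text

def refresh_screen_timeout(isis_queue, now):
    """Remove the current "screen time out" event and insert a new one,
    so the screen returns to dormant only after a full idle period
    following the most recent user action.

    `isis_queue` is a sorted list of (time, name) events, mutated in place."""
    isis_queue[:] = [e for e in isis_queue if e[1] != "screen_timeout"]
    isis_queue.append((now + SCREEN_TIMEOUT, "screen_timeout"))
    isis_queue.sort()
```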
If more master screen data is available or if no box has yet been touched, the system returns to the top of the Polling Loop.
If a box has been touched from the screen, then the system highlights the touched box and then returns to the calling subroutine. At the initialisation entry point, the user has selected a security/fire option from the main menu and has made a selection from the security management menu. The security floor plan sub-menu subroutine then stores the selected floor number, loads the selected floor plan screen data, speaks a responsive phrase through the speaker and voice synthesising system, such as "Please wait for response from the security system", and then displays a "please wait" message. The subroutine then sends a "zone map request" to the Security/Fire Interface; sets the Security/Fire Task to the "zone map request" state; and inserts a "security response expected" ISIS event into the ISIS queue. The Master Screen Task state is then set and the system returns to the top of the Polling Loop. When a response to a zone map request occurs on queue 1 or 2 of the Polling Loop, the Security/Fire Task (Task 1 or Task 2) jumps to the security floor plan sub-menu subroutine, entering at the zone map request entry point. Touches occurring while the system is waiting for a response from the Security/Fire Interface will show a cursor, but will not be compared with the touch boxes on the menu. From the zone map request entry point, the program then receives the type of result from the Security/Fire Task. Depending upon the result determined, the system may go directly to decode the armed or ready conditions of the security system and determine whether the security system is armed or ready. It may reach that point either directly or after informing the user of the response received. For example, if no response was received, the system speaks the phrase "no response"; if, on the other hand, an "unexpected response" was received, then the system will inform the user by speaking the phrase "unexpected response" before moving on.
After that, the "please wait" message is erased from the screen of the monitor and the system then jumps to the show zone status entry point. The following describes the entry points for the redisplay security floor plans function, the show zone status function and the show security system status function, in addition to continuing from the substate decoder. When the redisplay security floor plans function is jumped to, the system sets the fade colour to black, loads the floor plan screen according to the current floor number and then sets the fade colour to normal. The show zone status entry point occurs next. The program then decodes the zone status and marks the zones on the screen, as well as displaying the status of any open or shunted zones. The system then reaches the show security system status point. The security floor plan sub-menu routine of the master screen task then shows the armed or ready status on the screen of the monitor and determines whether the user has previously selected a zone. If the user had selected a zone, the selected zone is marked and the Print Zone Status subroutine is called. If no zone is selected, the phrase "no zone selected" is printed or displayed on the screen. After the Print Zone Status routine is called or the phrase "no zone selected" is printed, the system inserts a "screen time-out" ISIS event and sets the touch flag to 0. The state is then set to "analyse floor plan touches" and the program returns to the top of the Polling Loop. After the current substate is obtained from the substate decoder, the system then analyses whether any floor plan touches have been made. The Checktouch subroutine is first called and then a determination as to whether a box on the touch-screen has been touched is made. If no box has been touched, then a determination is made as to whether a new zone has been touched. If a new zone has been touched, then a determination is made as to whether the previous zone has been marked.
If a previous zone has been marked, then the previous zone marker is erased and the new zone is marked. In addition to marking the new zone, the Print Zone Status subroutine is called. An affirmative determination that a new zone has been touched thus causes the marking of the new zone and the calling of the Print Zone Status subroutine. The system then pauses for 0.1 seconds and then determines whether more master screen data is available. If an affirmative determination is made, then the subroutine returns to the top of the Polling Loop. If no more master screen data is available, then a determination is made as to whether a box has been touched on the current screen. If no box has been touched, then the subroutine returns to the top of the Polling Loop. If a box has been determined to have been touched, then, depending upon the function box touched, the security floor plan sub-menu subroutine will jump to the appropriate step to effectuate that function. Thus, if the Quit box has been touched, the subroutine will return to the dormant screen. If the Back box has been touched, then the subroutine will remove the current "screen time out" ISIS event and insert a new "screen time out" ISIS event. The subroutine will then initialise the security management sub-menu and display it on the display device. If the system arm/bypass function box has been selected as the box touched, then the system jumps to the arm/bypass system substate or subroutine. If the zone enable/disable function box was selected, then the system determines whether a zone has been selected and, if a zone has been selected, jumps to the shunt zone substate. If no zone has been selected, the subroutine returns to the top of the Polling Loop. If the highest alternate floor function box is selected, the subroutine sets the floor to the highest alternate floor and then redisplays the selected security floor plan on the screen.
If the lowest alternate floor function box has been selected, then the subroutine sets the floor to the lowest alternate floor and then displays that security floor plan. The Print Zone Status subroutine initially erases any previous zone status message and then prints the selected zone's number, name and current status. A determination is then made as to whether the selected zone is shunted (that is, disabled) and whether the enable/disable touch box displays the message "disable zone". If the enable/disable touch box does display a "disable zone" message, then the "disable zone" touch box is replaced with the "enable zone" touch box. If a negative determination is made, then a determination is made of whether the selected zone is not shunted while the enable/disable touch box displays the "enable zone" message. If the determination of that decision is negative, the program returns. If, on the other hand, the determination of that decision is affirmative, then the "enable zone" touch box is replaced with the "disable zone" touch box in a similar manner as described above. The program then returns. If the arm/bypass system function box is selected, the security floor plan sub-menu subroutine branches to the arm/bypass system substate or subroutine and first removes the current "screen time out" ISIS event and sets the touch flag to 0. A determination is then made as to whether the security system is currently armed. If the security system is currently armed, then the arm flag is reset. If the security system is not currently armed, then the arm flag is set. After the arm flag is either set or reset, the voice synthesis system causes the speaker to transmit the phrase "Please enter your security pass code" to prompt the user to input the appropriate security system password so that the system will permit a change from the armed or bypassed state to the other.
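The invariant maintained by the Print Zone Status subroutine above, namely that the enable/disable touch box always offers the action opposite to the zone's current shunt state, may be sketched as follows. The zone record fields in this illustrative Python sketch are assumptions:

```python
def print_zone_status(zone):
    """Sketch of Print Zone Status: build the zone status line and pick
    the enable/disable touch box label.

    `zone` is a dict with assumed keys 'number', 'name' and 'shunted'."""
    status = "shunted" if zone["shunted"] else "active"
    message = "Zone {} {}: {}".format(zone["number"], zone["name"], status)
    # A shunted (disabled) zone must show "enable zone", and vice versa.
    box_label = "enable zone" if zone["shunted"] else "disable zone"
    return message, box_label
```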
The user either speaks or in some other manner inputs the password and the subroutine then calls a Get Password subroutine which captures the password. If no password is entered (a "null" password), then the current security floor plan is redisplayed. If a password is captured by the home automation system, then a determination is made as to whether the arm flag is set. If the arm flag is set, then an arm security system message and the password are sent to the Security/Fire Interface. The Security/Fire Task state is then set to "arm system". If it is determined that the arm flag is not set, then a bypass security system message and the captured password are sent to the Security/Fire Interface and the Security/Fire Task state is then set to "bypass system". After the setting of the Security/Fire Task state to either the "arm system" or "bypass system" state, respectively, the subroutine sets the "security response expected" ISIS event and informs the user to please wait by speaking that phrase as well as displaying that message on the monitor. The system then sets the appropriate Master Touch-screen Task state and returns to the top of the Polling Loop. The Master Screen Task state is set such that the substate decoder will cause any touches on the screen to display a cursor, but the selected state will not compare touches with the touch boxes on the menu. Once a result has been returned from the Security/Fire Task or ISIS event, the program branches, depending upon the result, to one of six different places. The result can be determined to be good, which implies that the Security/Fire Interface has acknowledged the command, in which case the program then inserts a "wait for bad password" ISIS event, sets the substate to prevent any user actions and then returns to the Polling Loop.
If no "bad password" message is received from the Security/Fire Interface during the delay period, the ISIS event will return a result of "no bad password message during delay". The system then sends a zone map request to the Security/Fire Interface, sets the Security/Fire Task state to "zone map request", sets a "security response expected" ISIS event, sets the appropriate substate and then returns to the Polling Loop. If no response is the result determined, then the system speaks the phrase "no response" and then redisplays the current security floor plan. That would occur if, for example, the message to the Security/Fire Interface was garbled in transmission. If, on the other hand, a "bad password" result is returned, the system is caused to speak the phrase "invalid password" and then redisplays the security floor plans. If the result is either "already armed" or "already bypassed", then the system determines whether the arm flag has been set. If the arm flag had been set, then the phrase "already armed" is spoken to inform the user of that state and the current security floor plan is redisplayed. If the arm flag had not been set, then the system speaks the phrase "already bypassed" and then redisplays the current security floor plan. If the result is the return of the zone map, then the system decodes the armed/bypassed status and determines whether the system is armed or not. If the system is armed, then the phrase "system armed" is spoken and the current security floor plan is redisplayed. If the system is not armed, then the phrase "system bypassed" is spoken and the current security floor plan is redisplayed.
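The mapping from the result returned by the Security/Fire Task to the phrase spoken to the user, as described above, may be sketched as follows. The result names and phrases follow the text; the function itself and its parameters are illustrative assumptions:

```python
def security_result_phrase(result, arm_flag=False, system_armed=False):
    """Sketch of the spoken-phrase selection for arm/bypass results.
    Returns the phrase to speak, or None where the result triggers
    further processing (e.g. "good") rather than a phrase."""
    if result == "no response":
        return "no response"
    if result in ("already armed", "already bypassed"):
        # The phrase follows the arm flag set before the command was sent.
        return "already armed" if arm_flag else "already bypassed"
    if result == "zone map":
        # The returned zone map carries the decoded armed/bypassed status.
        return "system armed" if system_armed else "system bypassed"
    return None
```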
If the shunt zone substate is selected, then the program jumps to the shunt zone. The shunt zone subroutine then removes the "screen time out" ISIS event and sets the touch flag to 0. A determination is then made as to whether the security system is currently armed. If the determination of that decision is affirmative, then the system speaks the phrase "Please enter your security pass code" so that the user may enter the appropriate password. The get password subroutine is called in order to capture the password entered by the user and then a determination is made as to whether a null password has been entered. If a null password has been entered, then the appropriate security floor plan is redisplayed. If the password entered is a potential password, then the shunt zone command and the password are sent to the security/fire interface and the "security response expected" ISIS event and the zone number for the ISIS event are stored. If a determination is made that the security system is unarmed, a shunt zone message is sent to the security/fire interface and the ISIS event is set to "security response expected" and the zone number for the ISIS event is stored. After storing the zone number for the ISIS event, whether the security system is armed or unarmed, the shunt zone subroutine then proceeds to set the Security/Fire Task to the "shunt zone" state and to inform the user to wait by both speaking the phrase "please wait for response from the security system" and by displaying "Please wait" as a message on the screen. The appropriate Master Screen Task substate is then set to prevent further user actions until the security communication is resolved, and the program then returns to the Polling Loop. The shunt zone substate is entered. Depending upon the result determined, the subroutine then branches to one of several possible results. A first possible result is "good", which means that the Security/Fire Interface acknowledged the shunt zone command. 
A "wait for bad password" ISIS event is then inserted, the substate is then set and the program returns to the Polling Loop. If the result is that no response has been received, the system is caused to speak the phrase "no response" and then redisplays the security floor plans. If the result is a bad password, then the "wait for bad password" ISIS event is removed and the phrase "invalid password" is spoken. The program then redisplays the security floor plans. If the result is an unexpected response, then the system speaks the phrase "unexpected response" and then redisplays the security floor plans. If the result is that no bad password has been detected, the Security/Fire Task state is set to "zone map request" and a zone map request is sent to the Security/Fire Interface. The ISIS event is set to "security response expected" and then the Master Screen Task substate is set to prevent further user actions. The program then returns to the Polling Loop. If the result is determined to be zone map returned, then a determination is made as to whether the current zone is now shunted. If the current zone is presently shunted, then the zone is shown as shunted on the screen and the system then decodes whether an armed or bypassed condition is present. If the current zone is not presently shunted, then the shunted indicator shown on the screen is erased and the program moves on to decode the arm/bypass condition. The system then determines whether the security system is ready. If the security system is ready, then a determination is made as to whether the security system was ready before the shunting occurred. If the determination is made that the system was not ready before the shunting occurred, then the phrase "now ready" is spoken and the security floor plans subroutine continues to determine whether the security system is armed. 
If a determination is made that the security system is not ready, then a determination is made as to whether the security system was not ready before the shunting occurred. If the outcome of that determination is negative, then the phrase "now not ready" is spoken and the security floor plans subroutine then moves to a determination of whether the security system is armed. In either event, whether or not the security system was ready before shunting, the security floor plans subroutine then moves to determine whether the security system is armed. If the security system is armed, then a determination is made as to whether the security system was already armed before the shunting. If that determination is negative, then the phrase "now armed" is spoken and the system then determines whether the security system is armed. If the determination is that the security system is not armed, then a determination is made as to whether the security system was bypassed before the shunting occurred. If the outcome of that determination is negative, the phrase "now bypassed" is spoken and a determination is again made as to whether the security system is armed. If the outcome of the determination of whether the security system was bypassed before shunting is affirmative, the security floor plan subroutine then determines whether the security system is armed. If the security system is armed, then the current security floor plan is redisplayed. If the security system is not armed, then the "please wait" message displayed on the screen is erased, the zone marker is erased and the current highlighted box on the screen is blanked and the system then displays the security system status. The redormant (return to dormant screen) routine of the Master Screen Task operates as follows. From the redormant screen, the redormant subroutine turns off the voice response speakers and then determines whether an outgoing telephone call is active. 
If an outgoing telephone call is active, then the system hangs up the phone. If no outgoing telephone call is active or after the system hangs up the telephone, a determination is then made as to whether the video monitoring option is active. If the video monitoring function is active, then the redormant subroutine turns off the video monitoring. After turning off the video monitoring or if there is no active video monitoring, the redormant subroutine loads the dormant screen and then determines whether a telephone call is presently incoming. If the incoming telephone line is active, then the message "System is currently responding to a telephone call" is printed and the program returns to the top of the Polling Loop. If there is no incoming telephone call, then the Init recognition subroutine is called and the program then returns to the top of the Polling Loop. After the Reset Synchronisation routine is called, that subroutine resets the Master Screen V-state and then restores the previous touch coordinates to the new coordinates. The Reset Synchronisation routine then jumps to the master screen immediate response routine allowing the immediate response routine to act upon the most recent touch before the touch that got out of synchronisation. When it is determined that a new byte is in the Security/Fire Interface queue, the validation subroutine, depending upon its current V-state, branches to one of several places in the Validation subroutine. If the V-state is 0, then a determination is made as to whether the byte is equal to the first synchronisation byte. If it is not, then the V-state is reset. If the determination is affirmative, then the Validation subroutine increments the V-state and inserts a "complete security message" ISIS event. The subroutine then determines whether more Security/Fire Interface data is available. If there is no more Security/Fire Interface data available, then the subroutine returns to the top of the Polling Loop. 
If it is determined that more Security/Fire Interface data is available or if the current V-state is V-state 1, then a determination is made as to whether the byte is equal to the second synchronisation byte. If it is not, then the V-state is reset. If the current byte is equal to the second synchronisation byte, then the subroutine increments the V-state and determines whether more Security/Fire Interface data is available. If there is no more Security/Fire Interface data available then the program returns to the top of the Polling Loop. In the event that there is more Security/Fire Interface data available, or if the V-state upon receiving a new byte in the Security/Fire Interface queue is V-state 2, then a determination is made as to whether the incoming byte is a valid length for a message. If it is not, then the V-state is reset. If the new byte in the Security/Fire Interface is of valid length, then the V-state is incremented by one and the length byte is stored. A determination is next made as to whether more Security/Fire Interface data is available. If there is no more Security/Fire Interface data available, the program returns to the top of the Polling Loop. In the event that there is more Security/Fire Interface data available, or in the event that the V-state determination shows the current state to be V-state 3, a determination is made as to whether a valid command byte is present. If no valid command byte is present, the V-state is reset. If a valid command byte is present, a determination is made as to whether the length is greater than 1. If the outcome of that determination is affirmative, then the V-state is set to V-state 4. If the length is not greater than 1, then the V-state is set to V-state 5. After the V-state is set to either V-state 4 or V-state 5, respectively, a determination is again made as to whether more Security/Fire Interface data is available. 
If no more Security/Fire Interface data is available, then the program returns to the top of the Polling Loop. If more Security/Fire Interface data is available or if the current V-state is determined to be V-state 4, then the data byte is stored and a determination is made as to whether all data has been received from the Security/Fire Interface. If all data has been received from the Security/Fire Interface, then the V-state is incremented by one and a decision as to whether more Security/Fire Interface data is pending is made. If it is determined that all data has not been received, then the Validation subroutine moves directly to determine whether more Security/Fire Interface data is available. If no more security/fire data is available, then the program returns to the top of the Polling Loop. If the answer to the query is that more Security/Fire Interface data is available, then the program stores the nth data byte and then goes to V-state 5. If the current state is V-state 5, then the program would branch directly to V-state 5. From V-state 5, the program then removes the "complete security message" from the ISIS and sets the V-state back to 0.
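The V-state message validation machine described above can be sketched as a small byte-fed parser. This is an illustrative reconstruction only: the synchronisation byte values, length limit, valid command set and checksum rule are all assumptions, since the specification does not give concrete values.

```python
SYNC1, SYNC2 = 0xAA, 0x55        # assumed synchronisation bytes
MAX_LEN = 16                     # assumed maximum valid message length
VALID_COMMANDS = {1, 2, 3, 4}    # assumed valid command bytes

class SecurityFireValidator:
    """V-states 0-5 of the Security/Fire Interface message parser."""

    def __init__(self):
        self.reset()

    def reset(self):
        self.v_state = 0
        self.length = 0
        self.command = None
        self.data = []

    def feed(self, byte):
        """Feed one queue byte; return (command, data) once a message
        passes the checksum test, else None."""
        if self.v_state == 0:                 # expect first sync byte
            self.v_state = 1 if byte == SYNC1 else 0
        elif self.v_state == 1:               # expect second sync byte
            self.v_state = 2 if byte == SYNC2 else 0
        elif self.v_state == 2:               # expect a valid length byte
            if 1 <= byte <= MAX_LEN:
                self.length = byte
                self.v_state = 3
            else:
                self.reset()
        elif self.v_state == 3:               # expect a valid command byte
            if byte in VALID_COMMANDS:
                self.command = byte
                self.data = []
                # length > 1 means data bytes follow (V-state 4);
                # otherwise go straight to the checksum (V-state 5)
                self.v_state = 4 if self.length > 1 else 5
            else:
                self.reset()
        elif self.v_state == 4:               # collect the data bytes
            self.data.append(byte)
            if len(self.data) == self.length - 1:
                self.v_state = 5
        elif self.v_state == 5:               # checksum, then reset to 0
            ok = byte == (self.command + sum(self.data)) % 256
            msg = (self.command, list(self.data)) if ok else None
            self.reset()
            return msg
        return None
```

A message failing any stage simply resets the V-state, matching the "the V-state is reset" branches in the text.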
A determination is then made as to whether the checksum byte is valid. If the checksum byte is not valid, then the program returns to the top of the Polling Loop. If the checksum byte is determined to be valid, then the program moves into the immediate response portion of the Security/Fire Interface subroutine and acknowledges the message received to the Security/Fire Interface. A determination is then made of the current state of the Security/Fire Interface, whether it is in the Neutral State, the "bypass system" or other Contextual States. If it is in the Neutral State, the program proceeds to determine whether the current command is an alarm. All commands that are not alarms are ignored and the program returns to the top of the Polling Loop. If the current command is determined to be an alarm then, depending upon the type of alarm, the program will branch to one of three places. If the command is a fire alarm, then the program branches to the Master Screen Task fire alarm. If the command is determined to be a security alarm, then the program branches to the Master Screen Task security alarm. If the command is neither a fire alarm nor a security alarm, then the program sets the lighting mood preset for "all house lights" and then jumps to the top of the Polling Loop. If the state is determined to be the "bypass system" state, then the program determines what type of command was received and branches to the appropriate point. If the command received in the bypass system state is an acknowledgment, then the program sets the result equal to good and then jumps to the Master Screen Task bypass security system substate. If the command indicates that the system has already been bypassed, then the result is set to already bypassed and the program then jumps to the master screen bypass system substate. 
If the command is determined to be the rejection of the password inputted by the user, then the result is set to equal password rejected and the program then proceeds. If the command is not one of the three commands already discussed, then the program goes to the Neutral State. The other Contextual States, in addition to the bypass system state, are handled as follows. If the state is determined to be a shunt zone state, then the program determines which command within the shunt zone state has been received and takes appropriate action. If an acknowledge command has been received, then the result is set to good. If the command is determined to be a rejected password, then the result is set to equal rejected password. After the receipt of an acknowledge or rejected password (a rejected password is the same as a "bad password") command and the setting of the result, the program proceeds to the Master Screen Task shunt zone substate. For all other commands received, the program executes the Neutral State. If the state is determined to be the zone map request state, then a determination is made whether the command received is an acknowledge or rejected password or any other command. If the command received is an acknowledge command, then the result is set to good and the program returns to the routine requesting the zone map. If a rejected password command is received, then the result is set to equal rejected password and the program then returns to the routine requesting the zone map. For all other commands received, the program returns to its Neutral State. Lastly, if the Contextual State determination indicates that the current state is the arm system state, then a determination is made as to which command has been received. If an acknowledge command has been received, then the result is set to good and the program then goes to the master screen task arm/bypass security system substate. 
If the command received is the already armed command, then the result is set to equal already armed and the program then goes to the Master Screen Task arm/bypass security system substate. If the command is determined to be not ready, then the result is set equal to not ready and the program then proceeds. If the command is determined to be the password rejected command, then the result is set equal to rejected password and the program goes to the Master Screen Task arm/bypass security system substate. If the command is determined to be any other command than those previously discussed, then the program returns to the Neutral State. When a "screen time-out" ISIS event occurs and the touch flag is equal to 1, a new "screen time-out" ISIS event is inserted, the touch flag is set to 0, and the subroutine returns to the top of the Polling Loop. If a determination is made that the touch flag is not equal to 1, then the Quit box on the touch screen is blinked five times. A determination is then made as to whether new screen data is available. If new screen data is available, a new "screen time-out" ISIS event is inserted and the subroutine returns to the Polling Loop. Inserting a new "screen time-out" ISIS event serves to maintain the present screen on the video monitor. If new screen data is not available, then the subroutine returns to the dormant screen. The Security Response Expected subroutine of the ISIS events system operates as follows. After jumping to the security response expected ISIS event, the system determines whether this is the first time a security response expected ISIS event is being executed for that message. If it is the first time for execution of that event, the message is sent to the Security/Fire Interface again and a "security response expected" ISIS event is inserted. The subroutine then returns to the Polling Loop. 
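This "security response expected" time-out handling amounts to a resend-once policy: the first expiry retransmits the message and re-arms the event, while a later expiry gives up with a "no response" result. A hedged sketch, with hypothetical event-dictionary and callback names:

```python
def security_response_expected(event, resend, insert_event):
    """Handle one expiry of the "security response expected" ISIS
    event. On the first expiry, resend the message and reinsert the
    event; afterwards, report "no response"."""
    if event.get("first_time", True):
        resend(event["message"])                      # retransmit once
        insert_event({**event, "first_time": False})  # re-arm the event
        return None                                   # back to the Polling Loop
    return "no response"
```

The "no response" result would then be routed to whichever substate entry point matches the current master screen state, as the surrounding text describes.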
If it is determined that the detected security response expected ISIS event is not being executed for the first time, then the result is set to equal "no response" and, depending upon the current master screen state determination, the subroutine jumps to the zone map request entry point, the arm/bypass security system substate entry point or the shunt zone substate entry point. When a "wait for bad password" ISIS event is detected, the subroutine sets the result to "no bad password message" during delay and then sets the Security/Fire Task state to Neutral. A determination is then made of the current master screen state. Depending upon the current state, the system either then moves to the arm/bypass security system substate entry point or to the shunt zone substate entry point. Upon detecting a complete security message as an ISIS event, the system resets the security/fire V-state and then returns to the top of the Polling Loop. This is used to prevent incomplete messages from the Security/Fire Interface from disabling subsequent messages. FIG. 14 is a flow chart showing the operation of a Secondary Screen Task module. The Secondary Screen Tasks are empty shells which serve only to pipe their data to the Master Screen Task. Although not shown explicitly in the master touchscreen flow charts, the Master Screen Task pulls data from whatever queue corresponds to the current active screen. Thus, any secondary task may use the master screen routines by simply identifying itself as the current active screen and then jumping into the master screen validation routines. At the Secondary Screen Task, a determination is first made as to whether a new byte is present in the secondary screen queue. The secondary screen identifier is then stored to the variable and then the system jumps to the Master Screen Task validation routine. The following discussion describes the Arm/Bypass Security System algorithm, assuming no errors occur. 
After the setting of the security/fire state to either the "arm system" or "bypass system" states, respectively, the subroutine sets the "Security Response Expected" ISIS event. The Security/Fire Interface then sends an acknowledgment. The Security/Fire Task sets the result to "good" and then jumps to the arm/bypass security system entry point. Since the result is now set to "good", a "wait for bad password" ISIS event is inserted. The master screen substate is set to show a cursor but not to compare any touches with function boxes shown on the screen. The purpose for the "wait for bad password" ISIS event is to allow the Security/Fire Interface time to respond with a "bad password" message in the event that the password sent is invalid. In the event that the password sent was an invalid password, then the Security/Fire Interface will send a "bad password" message. If the Security/Fire Task receives a bad password message, then it will set the result equal to "bad password" and jump to the arm/bypass security system entry point. If the result is a "bad password", then the system is caused to speak the phrase "Security system reports an invalid password. Please try again". Once the "wait for bad password" ISIS event times out, the result will be set to "no bad password message during delay", the Security/Fire Interface state will be set to Neutral and the system will then jump to the arm/bypass security system entry point. Since the result is "no bad password message during delay", a zone map request is sent to the Security/Fire Interface and the substate is set to prevent touches from initiating any new action. Those steps are performed in order to determine the current armed or ready state of the security system. The Security/Fire Interface then sends the requested zone. The Security/Fire Task sets the result equal to "zone map returned" and jumps to the arm/bypass security system entry point. The zone map is returned. 
Depending upon the various combinations of current and previous armed and ready conditions of the security system, the system continues, and informs the user of any changes. The following describes the Enable/Disable Security Zone algorithm, assuming no errors occur. If the security system is armed, then the shunt zone command, the zone number and the user's password are sent to the Security/Fire Interface. If, alternatively, the system is bypassed, then only the shunt zone command and zone number are sent to the Security/Fire Interface. The "Security Response Expected" ISIS event is set and the Security/Fire Interface sends an acknowledgment. The Security/Fire Task sets the result to good. That Task then jumps to the shunt zone substate entry point. Since the result is set to "good", a "wait for bad password" ISIS event is inserted and then the master screen substate is set to show a cursor but not compare user touches with the display boxes. The purpose of the "wait for bad password" ISIS event is to allow the Security/Fire Interface time to respond with a "bad password" message in the event that the password sent is invalid, or the security system was armed and no password was sent with the shunt zone command. If the password sent was an invalid password, then the Security/Fire Interface will send a "bad password" message. If the Security/Fire Task receives a bad password message, then it will set the result equal to "bad password" and jump to the shunt zone substate entry point. If the result is "bad password", then the system speaks "Security system reports an invalid password". Once the "wait for bad password" ISIS event times out, it will set the result equal to "no bad password message during delay" and then jump to the arm/bypass security entry point. Upon determining that a new data byte is present in the input/output queue, a determination is made of the current V-state to which the program then jumps. If the current V-state is V-state 0, then a determination is made as to whether the new data byte in the input/output queue is a valid command byte. 
If the current byte is a valid command byte, the command is stored and then a determination is made as to whether the stored command is an acknowledgment. If the stored command is an acknowledgment, then the value of 1 is stored to the variable length and then the system jumps to the Immediate Response routines. If the current byte is not a valid command byte, the system then returns to the top of the Polling Loop. If the command stored is not determined to be an acknowledgment, then the value of 4 is stored as the variable length and the "complete controller message" ISIS event is inserted. A determination is then made as to whether any more input/output data is available. If no more input/output data is available, then the system returns to the top of the Polling Loop. If more input/output data is available, or if the V-state is determined to be V-state 1, then the new data byte in the output queue is stored as data and then a determination is made as to whether all data has been received. If all data has not been received, then the system returns to the top of the Polling Loop. If all data has been received, then the V-state is reset and the "complete I/O controller message" ISIS event is removed. The system then jumps to the Immediate Response routines. Upon jumping to the immediate response routine, a determination is made of the present command. If the present command is an acknowledgment, then the Immediate Response routine does nothing and the system returns to the top of the Polling Loop. If the present command is an on-to-off or off-to-on transition report, then the port number is decoded and the new bit mask and old bit mask are stored. The two bit masks are then compared to determine which inputs caused the transition report. If the command is a digital status command, then the system decodes the port number, decodes the data bytes corresponding to the input board and then goes to determine the current state. 
Likewise, after the new and old bit masks are compared and a determination as to which inputs caused the transition report is made, a determination is then made of the current state. If the current state is the Neutral State, then a determination is made as to whether the current command is a transition report. If the current command is not a transition report, then all other commands are ignored and the system returns to the top of the Polling Loop. If the current command is a transition report, then for each bit in the bit mask, a determination is made as to whether the transition was an off-to-on transition. If it was, the subroutine Off-To-On is called. If it was not, a determination is made as to whether the transition was an on-to-off transition; if so, the subroutine On-To-Off is called. If the transition is neither, then a determination is made as to whether all bits received were checked. If not, the program checks the next bit. If all of the bits have been checked, then the program returns to the Polling Loop. If the state is determined to be the read digital status state, then a determination is made as to whether the current command is a digital status command. If the current command is not a digital status command, then the program enters the Neutral State. If the current command is a digital status command, then the "I/O controller response expected" ISIS event is removed and the program jumps to the read digital inputs Support Environment routine. The read digital status Contextual State of the Input/Output Controller Task is shown. Once the Off-To-On routine has been called, the program jumps to the current port number. 
Depending upon whether the current port number is port N, port N + 1 or port N + 2, etc., the system then jumps to the appropriate port number and takes the appropriate action depending upon the number of the bit in the bit mask and then returns. As will be apparent in light of the above, the Neutral State of the input/output controller takes action based upon the number of the input and type of transition, either off-to-on or on-to-off, that caused an interrupt. For example, a message consisting of the numbers 101, 14, 255, 127 would be decoded. As described above, the two bit masks would be compared to determine that the 7th input (reading from 0 to 7) made an off-to-on transition and that the action assigned to that transition would be executed. Some transitions may be ignored, since they correspond to the return transition of a device. For example, a push-button being pressed would cause an off-to-on transition report, but when it is released, it would cause an on-to-off transition report. The system controller would take action on the off-to-on report, but not the on-to-off report. The following are some possible actions that would be taken based upon the detection of a transition: announce the presence of a car in the driveway, announce the presence of a person at the front door, announce that a bath is ready, and execute any action routines, for example, in response to a transition that was caused by a decorative touch switch. Such examples include lighting moods, bath controls, vacation mode, random lighting, controlling voice alerts and setting a distributed audio system to send music to preset areas of the home. 
THE DEVICE DRIVERS 
The Device Drivers perform the translation between the internal central processor commands and the commands necessary to achieve the desired function. 
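The worked example above (message 101, 14, 255, 127) can be reproduced in a short decoding sketch. The byte layout assumed here (command, port, new mask, old mask) follows the narrative; the function name is hypothetical.

```python
def decode_transition_report(command, port, new_mask, old_mask):
    """Compare the new and old 8-bit masks of a transition report and
    return which inputs (bits 0-7) changed, and in which direction."""
    assert command == 101  # 101 is the transition report in the example
    changes = []
    for bit in range(8):
        new_bit = (new_mask >> bit) & 1
        old_bit = (old_mask >> bit) & 1
        if new_bit and not old_bit:
            changes.append((port, bit, "off-to-on"))
        elif old_bit and not new_bit:
            changes.append((port, bit, "on-to-off"))
    return changes
```

For the example message, 255 (0b11111111) versus 127 (0b01111111) differs only in bit 7, so the 7th input is reported as an off-to-on transition, matching the text. A controller would then act on off-to-on reports but ignore the matching on-to-off "return" transitions.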
Since there are many varieties of home subsystems, the central processor software has available a range of unique device drivers to support each type of subsystem. For example, although several different lighting control systems are known, and each has its own unique device driver, all lighting mood device drivers are called in the same manner from within the central processor software. In the future, any new type of subsystem could be easily incorporated into the present system by developing a new standard Device Driver for that type of subsystem. Some of the more complex device drivers include a change to the current state of the Task assigned to their device. Alternatively, they may place an event on the Internal Scheduled Interrupt Server. For instance, some Device Drivers expect an acknowledgment from a secondary processor that their command has been understood and executed. That acknowledgment will come back over a serial line and be processed by a Task, rather than by a Device Driver. In that case, the Device Driver will place that device's task in a state where it is expecting that acknowledgment. It would also place an event on the ISIS that would notify it if the acknowledgment had not been received within a certain period of time. However, the present home automation system is designed to allow for additional device drivers to be added as desired. There are several components of the support environment, each of which will be discussed separately. They are: initialisation, graphics routines, error logging and response, task switching and controls, and user scheduled events.
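The uniform calling convention for Device Drivers can be illustrated with a registry: every lighting mood driver is invoked the same way, so supporting a new lighting control system only means registering a new driver. The vendor names and protocol strings below are purely hypothetical.

```python
DRIVERS = {}

def register_driver(subsystem, vendor):
    """Register a Device Driver under a (subsystem, vendor) key."""
    def wrap(fn):
        DRIVERS[(subsystem, vendor)] = fn
        return fn
    return wrap

@register_driver("lighting_mood", "vendor_a")
def vendor_a_mood(mood):
    # vendor A's hypothetical wire protocol
    return f"A-protocol: set mood {mood}"

@register_driver("lighting_mood", "vendor_b")
def vendor_b_mood(mood):
    # vendor B's hypothetical wire protocol
    return f"B-protocol: MOOD={mood}"

def set_lighting_mood(vendor, mood):
    """The central processor calls every driver in the same manner."""
    return DRIVERS[("lighting_mood", vendor)](mood)
```

A driver expecting a serial-line acknowledgment would additionally set its device's Task state and insert an ISIS time-out event, as the text describes.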
The initialisation phase, as the name suggests, takes place when the home automation system is first run. At that time, various actions must be taken to ensure that the run-time portion of the software begins from a known state. Thus, all variables are initialised to values that reflect the most common expected state of the various subsystems. Interrupt and schedule queues are established to service the various devices. The devices on the AT bus are reset and are brought to their active state. The remote secondary processors are sent interrogatory commands in order to ascertain that they are operative. Additionally, any user-scheduled events subsequent to the current time and date are loaded into the ISIS. Depending upon the time of day, scheduled device driver routines may be activated, such as photo cell monitoring. At this point, the screen "redormant" substate is executed to begin the run-time software. If any problems have occurred up to this point, the initialisation software will report and log the problem and then attempt to recover and continue the initialisation. Due to the modular nature of the system, and the fact that systems will be "rebooting" themselves subsequent to power failures, the software is designed to try to run a partially functioning system rather than to shut down completely. Another component of the support environment is the graphics routines. The displays (glass, monitor, mobile, tablet, etc.) placed throughout the home environment provide the major means of communicating information to the user. A variety of standard routines are thus provided in order to control the usage of this screen. They consist of the following routines: load screen, load local image, highlight touch box and large font. The third component of the support environment is the error logging and response function. Within the system, there are two types of errors that may occur: Code errors and System errors. 
Code errors are those generated by the run-time portion of the BASIC compiler. Those errors are of the type such as "divide by 0". Usually, the central processor software will recover from such errors, but, occasionally, the system may crash due to a particularly unpleasant error. Code errors are not reported to the user, but are logged with the error number, time, date and current state of the master screen task. System errors, on the other hand, occur when a routine detects a problem. These errors result from conditions that are actively examined by a particular software routine. Examples of system errors are voltage readings out of a normal range, failure of a secondary processor to respond, or inability to open a data file. System errors are logged with the error number, a text description of the error that may include some variable values, time, date, and the current state of the master screen task. The central processor almost always recovers from system errors. System errors may or may not be reported to the user, at the option of the routine which detected the error. The fourth component of the support environment is the Task Switching and Controls subpart. As discussed, all tasks are structured in approximately the same manner. Usually, states and state transitions are selected and executed based upon data processed from that task's assigned device or devices. However, a task's current state may occasionally be set or altered by an external software module, such as a device driver expecting an acknowledgment over a serial line. ISIS events are also used to select and execute a state in a task under certain circumstances. Tasks may also select and execute a state within another task. Such external task-mandated state execution usually occurs between the master screen task and another task.
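The two error classes can be summarised in a small logging sketch: both classes record the same core fields, system errors add a text description, and only system errors may be reported to the user, at the detecting routine's option. All names here are illustrative.

```python
import datetime

ERROR_LOG = []

def log_error(kind, number, screen_state, description="", report=False):
    """Log a "code" or "system" error and return whether it should be
    reported to the user (code errors never are)."""
    ERROR_LOG.append({
        "kind": kind,                 # "code" or "system"
        "number": number,
        "time": datetime.datetime.now().isoformat(timespec="seconds"),
        "screen_state": screen_state, # current master screen task state
        "description": description,   # used by system errors only
    })
    return report and kind == "system"
```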
Due to the lack of multi-tasking capabilities in the operating system and the high-level language, there are no software controls to limit such nonstandard state selection and execution, as described above. Thus, the functionality of the nonstandard task-switching is checked through software validation and verification during software development. The final component of the support environment is the User-Scheduled Events Module. The present home automation system allows the user to schedule events to occur at a future date and/or time, such as setting a "vacation mode" to occur while one is away for two weeks. The list of scheduled events is maintained in a disk file which is loaded when the program is run, and which is reloaded in the first few seconds of each day. When the schedule file is loaded, the dates of the events in the file are compared with the current date. If the dates coincide, then an event is inserted in the ISIS queue according to the time and type of event in the schedule file. That line is then erased from the schedule file. OPERATION There are two phases to the operational sequence of the disclosed home automation system: the system initialisation phase and the normal operation phase. When the home automation system is fully installed in a home and the power switch is then turned on, the system initialisation phase begins under the control of a batch file of operating system commands. Near the end of the initialisation phase, the home automation real-time software, HEART, is loaded and control is then passed to that system. The HEART software performs the remaining initialisation tasks and then enters the normal operation phase. Each of the system initialisation and normal operation phases is described below in more detail. At power on, the system initialisation phase begins. The central processor executes its standard boot-up procedure and runs a predetermined auto-execute batch file designed to initialise the disclosed home automation system.
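The daily schedule-file reload described above — compare each event's date with today, move matching events into the ISIS queue, and erase the matching lines — could be sketched as follows. The one-line-per-event text format is an assumption made for illustration; the patent does not specify the schedule file's format.

```python
import datetime

def reload_schedule(schedule_lines, isis_queue, today):
    """Each line is assumed to be 'YYYY-MM-DD HH:MM event_type'.
    Returns the lines to keep in the schedule file."""
    remaining = []
    for line in schedule_lines:
        date_s, time_s, event_type = line.split()
        when = datetime.datetime.strptime(f"{date_s} {time_s}", "%Y-%m-%d %H:%M")
        if when.date() == today:
            isis_queue.append((when, event_type))  # insert into the ISIS queue
        else:
            remaining.append(line)                 # keep for a later day
    isis_queue.sort(key=lambda ev: ev[0])          # earliest event first
    return remaining
```

A line whose date matches today is consumed (inserted into the queue and dropped from the file), exactly the compare-insert-erase cycle the text describes.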
In the preferred embodiment, this batch file is set up as a series of commands in XML format. The main functions of this batch file are to configure the present home automation system hardware and to load data files into memory. The initial portion of the batch file serves to change hard disk directories where data files are located and copies files from those directories to the system extended memory or RAM. That is done so that the computer can load and use the data more rapidly than if it had to rely on hard disk access. After all the files have been copied, the auto-execute command runs the other initialisation software necessary to initialise the system hardware. The last function of the auto-execute batch file is to load and run the HEART operating software. The HEART software begins by performing its own initialisation routines. Variables are initialised, the interrupt and scheduling queues are established, system devices are reset and made active, other processors in the home automation system are interrogated for proper operation and active schedules are loaded. After initialisation, the HEART software executes the screen "redormant" state. From that point on, the system is in the normal operation phase. After the inventive home automation system has been initialised, the normal operation phase begins. In normal operation, the system usually appears to be "dormant", with no apparent actions being performed. However, the system software is busy in its Polling Loop, checking all of the input queues for data. Based upon the data received at the input queues, the present home automation system will take appropriate action. In order to describe the operation of the disclosed home automation system in normal use, the following describes how the user would utilise the screen to control the security system.
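A single pass of the Polling Loop mentioned above might look like the following sketch: every input queue is checked in turn and any queued data is dispatched to its handler. The queue and handler names are hypothetical.

```python
def polling_loop_pass(input_queues, handlers):
    """One pass over all input queues; returns the number of items handled.

    input_queues: dict mapping source name -> list of pending items
    handlers:     dict mapping source name -> callable taking one item
    """
    handled = 0
    for name, queue in input_queues.items():
        while queue:                 # drain all pending data from this source
            item = queue.pop(0)
            handlers[name](item)     # take the appropriate action
            handled += 1
    return handled
```

In normal operation this pass would run continuously; when every queue is empty the pass handles nothing, which is the externally "dormant" appearance the text describes.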
It should be understood, however, that controlling the security system is only one of the functions described herein, but it is believed to be representative of how the instant home automation system operates. The following example shows how, using screen commands, a user would disable a security zone and then arm his security system. In a preferred installation, the user steps up to the window/display, or opens his tablet or mobile phone, which presents a "dormant screen" display; this can range from a blank screen to a logo to a custom graphic designed for the user. At this point, the Master Screen Task is in the Neutral State and the software is in the Polling Loop. To enter a command, the user touches the display screen anywhere on its surface. The user's touch causes the screen internal electronics 50 to begin sending touch position locations on a communications cable connected to a serial port on the system controller, namely through the serial interface. The Validation routine interprets the data and enters the Immediate Response routine. While the user keeps his finger on the screen, a small dot is continually displayed beneath his finger. The cursor will follow the user's finger if he moves it around the screen. When the user lifts his finger from the screen for longer than 0.1 seconds, the disclosed home automation system will recognise that as a valid touch event on the dormant screen, speak an optional greeting and then initialise the main menu state. While in the main menu mode, the system displays a new graphic such as that shown in FIG. 3a, a menu of the general features that the system can perform. The main menu screen task operates as a general Contextual State, as shown in flow chart form in FIG. 3d. Unlike the dormant state, however, where, in this example, only one action was possible through the screen, the main menu state will perform different functions depending upon where the screen is touched.
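The valid-touch rule above — positions stream in while the finger is down, and a lift lasting longer than 0.1 seconds commits the touch at the last reported position — can be sketched as follows. The sample format and function name are illustrative assumptions.

```python
LIFT_THRESHOLD = 0.1  # seconds; per the description above

def detect_touch(samples):
    """samples: list of (timestamp, x, y), with (timestamp, None, None)
    meaning 'no finger on the screen'. Returns the (x, y) of the first
    committed touch, or None if no lift exceeded the threshold."""
    last_pos, lift_start = None, None
    for t, x, y in samples:
        if x is not None:
            last_pos, lift_start = (x, y), None    # finger down: track cursor
        else:
            if lift_start is None:
                lift_start = t                     # finger just lifted
            elif t - lift_start > LIFT_THRESHOLD and last_pos:
                return last_pos                    # lift held >0.1 s: commit
    return None
```

A brief lift shorter than the threshold commits nothing, which is what lets the user slide a finger around (with the cursor following) without triggering an action.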
Once again, a display cursor "follows" the user's finger as he slides it about the screen, but now, when his finger moves across any touch-active area or function box, that area is highlighted by changing its colour. If the user lifts his finger while an area is highlighted, then the software will recognise an input and perform the action appropriate for that location in the main menu state. In this example, the user touches the main menu function box titled Security Management. The home automation system responds by displaying a Security Management menu. The software then enters the Security Management state. The Security Management screen is a menu screen of the available security features. In this example, the user will disable a single security zone so that a window can be left open, and then arm the security system using a floor plan screen display. To do this, the user touches the menu box labelled ENTRY LEVEL, since the zone to be disabled is on that floor. The menu box is highlighted, and when the user lifts his finger, the display changes to show a floor plan of the entry level of the home. After displaying the floor plan, the home automation system requests the status of all of the security zones on that level from the security interface. Next, the system displays the security status of each zone on the display. This display indicates with a coloured icon whether each zone is enabled, disabled or open. (Meanwhile, the screen state is set to analyse floor plan touches.) In order to disable the right living room window, the user first selects that window by touching it on the floor plan display. The system responds by highlighting the window with a coloured box, and describing the associated security zone and its status at the bottom of the screen. To disable the zone, the user then touches the white box labelled DISABLE ZONE.
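The highlight-then-act-on-lift behaviour described above amounts to a hit test over touch-active boxes: the box under the finger is highlighted, and the action fires only if the finger lifts while a box is highlighted. The box layout below is invented for illustration.

```python
# Hypothetical touch-active function boxes as (x0, y0, x1, y1) rectangles.
BOXES = {
    "SECURITY": (0, 0, 100, 40),
    "LIGHTING": (0, 50, 100, 90),
}

def box_under(x, y):
    """Return the name of the box to highlight at (x, y), or None."""
    for name, (x0, y0, x1, y1) in BOXES.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def on_finger_lift(x, y):
    """Perform an action only when a box is highlighted at the lift point."""
    return box_under(x, y)  # None means the lift is ignored
```

Sliding across the gap between boxes highlights nothing, so lifting there performs no action, matching the behaviour the text describes.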
The system responds by requesting the home security system to disable the zone and awaits confirmation of its request from the security system. When the zone disable request is accomplished and confirmed, the system modifies the zone icon and changes the status description on the screen. The next step in this example is for the user to arm the security system. To do that, the user touches the white box labelled ARM SYSTEM. The box is highlighted, and the display changes to a pass code display. The next step is for the user to enter a security password code, one digit at a time, followed by the command OK. If the code is correct, the system sends an "arm" request to the security system. When confirmation is received, the Entry Level screen is again displayed with the green "ready to arm" indicator replaced by a red "armed" indicator. The last step in this example is for the user to communicate to the home automation system that he has completed his task and is through with the system. To do this, he touches the white box marked "Quit", which highlights and returns the system to its dormant screen and dormant state. Thus, the system jumps to the redormant routine. As has been described previously herein, the present inventive expandable home automation system can be utilised to control numerous environments. One such environment which can be controlled in a manner similar to that disclosed herein is a media or conference room. The main menu of the display, similar to that disclosed herein in connection with the expandable home automation system, is shown. Equipment such as audio equipment, TV tuners, media players, lights and drapes, overhead projection and slide presentation equipment can be controlled merely by touching the appropriate function box on the screen menu. Other functions such as quitting, muting the sound, volume up and down, and lights on and off can also be provided. The audio sub-menu screen is selected from the main menu screen.
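Returning to the security example above, each screen command follows a request/await-confirmation pattern: the disable request and the arm request are each sent to the security subsystem, and the screen state is updated only once confirmation arrives. The `SecurityInterface` stub and pass code below are placeholders standing in for the real security subsystem.

```python
class SecurityInterface:
    """Stub for the home security system; zone names and code are invented."""
    def __init__(self):
        self.zones = {"living_room_right_window": "enabled"}
        self.armed = False

    def disable_zone(self, zone):
        self.zones[zone] = "disabled"
        return True                      # confirmation of the request

    def arm(self, code, correct_code="1234"):
        if code != correct_code:
            return False                 # wrong pass code: stays disarmed
        self.armed = True
        return True                      # confirmation: show "armed"

def disable_and_arm(security, zone, code):
    """Returns the final (zone_status, armed) state shown on the screen."""
    security.disable_zone(zone)          # send request, await confirmation
    security.arm(code)                   # 'arm' request after code entry
    return security.zones[zone], security.armed
```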
In addition to containing the functional control block contained in the main menu screen, the audio sub-menu screen allows the user to enable a plurality of high fidelity speakers. The TV tuner sub-menu screen is selected by touching the TV tuner functional block. The TV tuner sub-menu screen allows the user to turn the TV on by selecting a particular channel, to turn the TV off, and to scan the channels using up and down function blocks. The sub-menu also allows the user to pause, rewind and record live TV.
The media player (Blu-ray, media server, VCR) can be actuated and placed in the play, pause, fast forward, rewind or stop mode; it can be turned off, and an indication is provided as to whether the media has been loaded. The lights and drapes sub-menu screen appears upon selecting the lights and drapes function from the main menu screen. Various predetermined settings can be chosen by selecting scenes, which actuate the lights and drapes according to predetermined data in a manner similar to that discussed in connection with the lighting moods sub-menu screen. In addition, functional blocks are provided to actuate the drapes to change their state either from open to closed or from closed to open, and to turn the lights off. The overhead projection sub-menu screen appears on the monitor upon selecting the overhead projection functional box from the main menu screen. The touchscreen menu allows the user to turn on the overhead projector, turn the lights off and move the projection screen up and down, or the whole system can be programmed to perform the whole process automatically. A slide presentation sub-menu screen is selected by choosing the slide presentation function from the main menu screen. A plurality of projectors may be actuated and the lights turned on and off using the functional blocks provided on this sub-menu screen. In addition, the selected projector can be focused in and out and caused to move in either a forward or reverse direction, using additional functional touch blocks provided on this sub-menu screen. Although many of the functions described herein operate to set parameters to those values preset in data files, it will of course be obvious to those of ordinary skill in the art that the present expandable home automation system can also be utilised to generate ramp signals with which to continuously vary the settings to a contemporaneous user-determined value.
Although only a preferred embodiment is specifically illustrated and described herein, it will be appreciated that many modifications and variations of the present invention are possible in light of the above teachings and within the purview of the appended claims without departing from the spirit and intended scope of the invention.

Claims (20)

1. A system for automatically controlling a plurality of remotely controllable subsystems within a house, said system also being useful for automatically performing a plurality of commands input to the system by a user, said system comprising: a programmed data processor; at least one data interface means connected to said programmed data processor by means of a databus, said at least one data interface means being connected to said plurality of remotely controllable subsystems for providing bidirectional direct communication between the subsystems and said programmed data processor; visual communication interface means comprised of a high resolution video monitor/Glass and associated screen interface, through which the system communicates information received from said subsystems and also accepts commands from said user, said commands being communicated to said system merely by the user touching said screen interface means; and audio communication interface means comprised of a voice recognition and speech system, through which the system and user communicate with each other, said voice recognition and speech system functioning together with said visual communication interface to provide cues to said user as to available commands, feedback of whether the current command has been accepted by the system and the results of performing said current command.
2. The system of claim 1, further including a modem through which said system can communicate with external information retrieval data bases.
3. The system of claim 1, further including a plurality of types of user devices simultaneously connected to said programmed data processor for communicating commands to said system, said user devices comprising at least one of remote, tablet, computer, hand-held remote controls, computer keyboards, virtual keyboards, gestures, telephones and mobile phones.
4. The system of claim 1, wherein said plurality of data interface means includes translator means, connected between a commercially available automation controller and said programmed data processor, whereby said system can control said automation controller.
5. The system of claim 1, wherein said visual communication interface means provides the user with a floor plan display of at least one floor of said house, by which the user may enter certain commands into the system by touching the appropriate portion of said floor plan.
6. The system of claim 1, further including home appliances connected to said data interface means such that said system can control said home appliances.
7. The system of claim 1, wherein the user may schedule the occurrence of user-determined events by the system by utilizing said visual communication interface means.
8. The system of claim 1, wherein said remotely controllable subsystems comprise at least one of home heating and cooling systems, access control systems, security systems, entry/exit and lighting systems.
9. The system of claim 1, wherein one of said plurality of remotely controllable subsystems comprises audio/video entertainment means which may be controlled by the user by operation of said visual communication interface means.
10. The system of claim 1, wherein said system is expandable to control a plurality of remotely controllable subsystems and home appliances by incorporating an unlimited number of data interface means.
11. A system for automatically controlling a plurality of remotely controllable subsystems within a house, said system also being useful for automatically performing a plurality of commands input to the system by a user, said system comprising: a programmed data processor; a plurality of data interface means connected to said programmed data processor by means of a databus, said plurality of data interface means being connected to said plurality of remotely controllable subsystems for providing bidirectional direct communication between the subsystems and said programmed data processor; visual communication interface means comprised of a high resolution video monitor and associated screen interface, through which the system communicates information received from said subsystems and also accepts commands from said user, said commands being communicated to said system merely by the user touching said screen interface means; and a plurality of additional communication interface means, including at least two of a tablet, laptop, computer, mobile phone, a gesture recognition system, a voice recognition system, hand-held remote control, computer keyboard and telephone, said at least two additional communication interface means being simultaneously connected with each other and said visual communication interface means to said programmed data processor such that said user may communicate commands to said system using any of said connected communication interfaces.
12. The system of claim 11, wherein said system is expandable to control a plurality of remotely controllable subsystems and home appliances by incorporating an unlimited number of data interface means.
13. The system of claim 11, further including a modem through which said system can communicate with external information retrieval data bases.
14. The system of claim 11, wherein said plurality of data interface means includes translator means, connected between automation controllers and said programmed data processor, whereby said system can control said automation controller.
15. The system of claim 11, wherein said visual communication interface means provides the user with a floor plan display of at least one floor of said house, by which the user may enter certain commands into the system by touching the appropriate portion of said floor plan.
16. The system of claim 11, further including home appliances connected to said data interface means such that said system can control said home appliances.
17. The system of claim 11, wherein the user may schedule the occurrence of user-determined events by the system by utilizing said visual communication interface means.
18. The system of claim 11, wherein the user may schedule the occurrence of user-determined events by the system by utilizing said voice recognition system.
19. The system of claim 11, wherein said remotely controllable subsystems comprise at least one of home heating and cooling systems, access control systems, security systems and lighting systems.
20. The system of claim 11, wherein one of said plurality of remotely controllable subsystems comprises audio/video entertainment means which may be controlled by the user by operation of said visual communication interface means.
AU2013100081A 2012-10-22 2013-01-30 Home Environment Automated Real Time (HEART) System Ceased AU2013100081A4 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2013100081A AU2013100081A4 (en) 2012-10-22 2013-01-30 Home Environment Automated Real Time (HEART) System

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
AU2012904648A AU2012904648A0 (en) 2012-10-22 Home Evolution System
AU2012904648 2012-10-22
AU2013100081A AU2013100081A4 (en) 2012-10-22 2013-01-30 Home Environment Automated Real Time (HEART) System

Publications (1)

Publication Number Publication Date
AU2013100081A4 true AU2013100081A4 (en) 2013-03-07

Family

ID=47790689

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2013100081A Ceased AU2013100081A4 (en) 2012-10-22 2013-01-30 Home Environment Automated Real Time (HEART) System

Country Status (1)

Country Link
AU (1) AU2013100081A4 (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9568902B2 (en) 2013-03-15 2017-02-14 Vivint, Inc. Home security system with touch-sensitive control panel
US9485344B2 (en) 2014-12-01 2016-11-01 Honeywell International Inc. Personalizing interaction with a structure
US9979812B2 (en) 2014-12-01 2018-05-22 Honeywell International Inc. Personalizing interaction with a structure
US10498877B2 (en) 2014-12-01 2019-12-03 Ademco Inc. Personalizing interaction with a structure

Similar Documents

Publication Publication Date Title
US5086385A (en) Expandable home automation system
EP3070557B1 (en) Smart home scenario switching method and system
US6756998B1 (en) User interface and method for home automation system
US6792319B1 (en) Home automation system and method
US6909921B1 (en) Occupancy sensor and method for home automation system
CN101887342B (en) Intelligent terminal interactive interface display method and system
KR102231105B1 (en) control device and method for controlling the same
CN101833286A (en) Intelligent home controller
US20050267605A1 (en) Home entertainment, security, surveillance, and automation control system
US20140253483A1 (en) Wall-Mounted Multi-Touch Electronic Lighting- Control Device with Capability to Control Additional Networked Devices
WO2014190886A1 (en) Intelligent interaction system and software system thereof
KR20110097688A (en) Apparatus and method for assigning scenarios to command buttons
KR20110063516A (en) Touch-sensitive wireless device and on-screen display for remote control of the system
CN110543159B (en) Intelligent household control method, control equipment and storage medium
KR20140038959A (en) Method and apparatus for creating and modifying graphical schedules
US20070089725A1 (en) Multifunctional aspirating hood for household use
JP2002318843A (en) System, device, and method for remotely managing equipment, and storage medium
JPH04175921A (en) Home information board
EP2161878A1 (en) System for controlling electrically operated devices
CN103107925A (en) Digital family control system and method thereof
JP7033724B2 (en) Information terminal and operation support program
JP2002315069A (en) Remote controller
AU2013100081A4 (en) Home Environment Automated Real Time (HEART) System
CN112259096B (en) Voice data processing method and device
CN112562667A (en) Storage medium, voice response apparatus and method

Legal Events

Date Code Title Description
FGI Letters patent sealed or granted (innovation patent)
MK22 Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry