US20070033033A1 - Dictate section data - Google Patents
- Publication number
- US20070033033A1 (application Ser. No. 11/498,956; published as US 2007/0033033 A1)
- Authority
- US
- United States
- Prior art keywords
- report
- dictation
- section
- user
- computer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/3331—Query processing
- G06F16/334—Query execution
- G06F16/3343—Query execution using phonetics
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/951—Indexing; Web crawling techniques
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H10/00—ICT specially adapted for the handling or processing of patient-related medical or healthcare data
- G16H10/60—ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H15/00—ICT specially adapted for medical reports, e.g. generation or transmission thereof
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H40/00—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
- G16H40/60—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
- G16H40/67—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
Definitions
- a computer-based method for dictating into a report section via an electronic network.
- the method includes: marking a first insertion point in a report section at a first user-selected position, the first user-selected position being selectable to allow a user to record a dictation at any position in the report section, the dictation comprising electronic audio signals; and recording a first dictation into the report section at the first insertion point, the first dictation comprising electronic audio signals.
- a computer-based method for dictating into a report in a flexible manner.
- the method includes generating an instance of a report, wherein the report includes at least one of a report header, report part, report section or section data; setting an indicator to indicate a mode of dictation; and recording a dictation into the report, the dictation comprising electronic audio signals.
- a system for allowing a user to selectively dictate into a report.
- the system comprises: an instructable data processor for generating an instance of a report, the report comprising a report header, at least one report part, and at least one report section; a display device operatively associated with the instructable data processor for displaying the instance of the report; at least one of a mouse, a keyboard, or an assisted device operatively associated with the display device for marking an insertion point in a report section at a user-selected position; and a microphone operatively associated with the instructable data processor to record a dictation into the report section at the insertion point, wherein the dictation comprises electronic audio signals.
- FIG. 1 illustrates a computer network suitable for use in accordance with an exemplary embodiment of the present invention.
- FIG. 2 illustrates a report structure showing a medical report header, part, section and section data, according to an exemplary embodiment of the present invention.
- FIGS. 3A to 3D illustrate a system for allowing a user to selectively dictate into the report structure of FIG. 2, according to an exemplary embodiment of the present invention.
- FIG. 4 is a flowchart showing a method of dictating into a report section, according to an exemplary embodiment of the present invention.
- FIG. 5 is a flowchart showing a method of dictating into a report in a flexible manner, according to an exemplary embodiment of the present invention.
- FIG. 6 shows a graphical user interface including a virtual tape recorder, according to an exemplary embodiment of the present invention.
- FIG. 1 illustrates a computer network suitable for use in accordance with an exemplary embodiment of the present invention. It should be understood that the elements shown in FIG. 1 may be implemented in various forms of hardware, software or combinations thereof.
- a computer network 100 includes clinical database 190, general database 180, application service 170, secure Internet server 160, and at least one Internet access device, such as, for example, workstation 115, PDA 120, laptop (or notebook) computer 125, or other Microsoft Windows-enabled mobile devices 130.
- the computer network 100 also includes a software subsystem called the “universal address book”, which maintains information on all entities in the system.
- universal address book (UNAB) data 185 is contained in the general database 180 .
- the information such as person identifying and demographic information stored in the general database 180 may be separated from the data that is stored in the clinical database 190 .
- the general database 180 is designed to be run either integrated with the other database tables or hosted as a separate database system in a geographically different site from the clinical database 190 .
- a hacker who succeeds at hacking into one site will not obtain the other's information. For example, if a hacker were to hack into the clinical database 190 the hacker would not have the person identifying information.
- the demographic data and/or clinical data may be encrypted.
- the connection between the UNAB data 185 and the clinical database 190 may also be encrypted.
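The identity/clinical separation described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the names (`general_db`, `clinical_db`, `register_person`) are assumptions. The key idea is that the two stores share only an opaque identifier, so a breach of one yields no useful information from the other.

```python
import uuid

general_db = {}   # stands in for general database 180 (UNAB data 185): identifying data
clinical_db = {}  # stands in for clinical database 190: clinical data only

def register_person(name, date_of_birth):
    """Split a person's record across the two stores, linked by an opaque id."""
    person_id = uuid.uuid4().hex  # random id; reveals nothing about the person
    general_db[person_id] = {"name": name, "dob": date_of_birth}
    clinical_db[person_id] = {"clinical_data": []}
    return person_id

pid = register_person("Mary Smith", "1970-01-01")
# a hacker holding only the clinical store has no person-identifying information
assert "name" not in clinical_db[pid]
```

In a deployment, encrypting each store and the link between them (as the passage notes) would layer on top of this structural separation.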
- the secure Internet server 160 includes modules to facilitate access to/from application service 170 to the various Internet access devices 115, 120, 125 and 130 connected to the network over the Internet 150.
- each workstation 115 may be a fixed or portable personal computer equipped with a computer monitor or screen, a keyboard, a microphone and/or a camera, software modules for browsing hypertext or hypermedia pages, a set of computer speakers and a computer mouse.
- data or information can be input into the secure Internet server 160 from the various Internet access devices 115, 120, 125 and 130 over the Internet 150 without software specially made for the secure Internet server 160.
- Specific software that may be needed from time to time can be downloaded from the secure Internet server 160 and installed at the various Internet access devices 115, 120, 125 and 130.
- security software for user identification or authentication can be loaded at the user's station and used to ensure the user is a registered subscriber.
- Commercially available software such as VoiceID or Pronexus VBVoice can be used for the speaker identification process.
- Secure Internet server 160 may include an instructable data processor which can be coupled to a hard disk, a keyboard, mouse, and/or another form of user interface (e.g., microphone) as well as to a video card and display device, a network interface card, telephony cards and circuits, and random access memory (RAM), where the latter alone or in combination with the hard disk may contain system software which provides instruction signals for instructing the data processor and/or other instructable data processor to carry out machine-implemented operations in accordance with the present disclosure.
- dictation refers to electronic audio signals, such as for example, electronic audio signals representing user input speech.
- dictation is classified into four types which determine the processing to be done to that voice recording.
- the first type is a “standard dictation,” which is a voice recording intended to be transcribed. When the transcription is completed, the voice recording does not need to be saved and is archived after an allotted time.
- the second type is called a “permanent dictation,” which is a dictation that is intended to be transcribed, but after transcription the original voice recording needs to remain; it does not get archived.
- the third type is an “annotation”.
- An annotation is a dictation that is not intended to be transcribed, and it will be saved as a voice recording.
- the fourth type is a “transcribe-on-demand dictation”.
- a transcribe-on-demand dictation is an annotation in the sense that it is not intended to be transcribed, but subsequent events may require that voice recordings be transcribed for the purposes of that event, such as an allegation of medical malpractice.
- Based on a user indication those voice recordings that were marked as transcribe-on-demand can be transcribed.
- the user indication may include parameters such as name of patient, date(s), etc. For example, the user may indicate to transcribe all recordings for Mary Smith.
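The four dictation types and the transcribe-on-demand selection can be sketched as follows. This is an illustrative model only; the class and function names (`DictationType`, `select_on_demand`) and parameter set are assumptions, not the patent's implementation.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum, auto

class DictationType(Enum):
    STANDARD = auto()              # transcribed; recording archived after an allotted time
    PERMANENT = auto()             # transcribed; original recording is retained
    ANNOTATION = auto()            # never transcribed; kept as a voice recording
    TRANSCRIBE_ON_DEMAND = auto()  # kept as voice unless a later event requires transcription

@dataclass
class Dictation:
    patient: str
    recorded_on: date
    kind: DictationType

def select_on_demand(dictations, patient=None, recorded_on=None):
    """Return transcribe-on-demand recordings matching the user's indication,
    e.g., all recordings for Mary Smith, optionally narrowed by date."""
    return [d for d in dictations
            if d.kind is DictationType.TRANSCRIBE_ON_DEMAND
            and (patient is None or d.patient == patient)
            and (recorded_on is None or d.recorded_on == recorded_on)]
```

The type determines downstream processing: a transcription queue would consume STANDARD and PERMANENT recordings, while ANNOTATION and TRANSCRIBE_ON_DEMAND recordings stay as audio until an event (such as a malpractice allegation) triggers `select_on_demand`.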
- FIG. 2 illustrates a report structure including a medical report header, part, section and section data, according to an exemplary embodiment of the present invention.
- various types of reports are suitable for use in accordance with embodiments of the present invention, including but not limited to medical reports, client reports, engineering reports, research reports, tax reports, accounting reports, accident reports, inventory reports, business reports, insurance reports, financial reports, government reports, or documentation reports, etc.
- the report structure 200 illustrates an exemplary medical report that includes report header 210, report part 220, report section 230 and report section data 240.
- a report, in accordance with an exemplary embodiment of the present invention, includes a report header 210, at least one report part 220, and at least one report section 230.
- the report header 210 may contain metadata relating to the entire report, such as for example, person, patient, client, author, entity such as a government entity, physician, engineer, attorney, date, file identifier such as project name, attorney docket number or security classification level, location and so forth.
- each report section 230 contains clinical data that documents the information relevant to that section.
- the data for the cardiac exam section may contain information about an EKG, resting pulse, etc.
- the section data can include data that was entered by various means, such as text entry and dictation(s).
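The header/part/section/section-data hierarchy described above can be sketched as nested data structures. This is a minimal sketch under assumed names (`Report`, `ReportPart`, `ReportSection`); the patent does not prescribe a concrete representation.

```python
from dataclasses import dataclass, field

@dataclass
class ReportSection:          # e.g., the cardiac exam section
    title: str
    data: list = field(default_factory=list)  # text entries, dictation placeholders, etc.

@dataclass
class ReportPart:             # e.g., the physical examination part
    title: str
    sections: list = field(default_factory=list)

@dataclass
class Report:
    header: dict              # report-wide metadata: patient, author, date, ...
    parts: list = field(default_factory=list)

cardiac = ReportSection("Cardiac Exam",
                        data=["EKG within normal limits", "resting pulse 72 bpm"])
report = Report(header={"patient": "Mary Smith", "author": "Dr. Jones"},
                parts=[ReportPart("Physical Examination", sections=[cardiac])])
```

Because section data is an ordered list of mixed entries, text and dictation placeholders can be interleaved freely, which is what the selective-dictation scenarios below rely on.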
- FIGS. 3A to 3D illustrate a system for allowing a user to selectively dictate into the report structure of FIG. 2, according to an exemplary embodiment of the present invention.
- a user may begin dictation of an entire report when the user begins recording in a blank report after clicking on a report header.
- information about the location and/or other attributes of the voice recording is stored in the clinical database 190, and a placeholder such as the tape icon having a text label “311” shown in FIG. 3A is inserted within the report header 210.
- the tape icon labeled 311 contains a link, shown by an arrow in FIG. 3A, to the dictation entity (box labeled “Dictation 311”) in the clinical database 190.
- Playback can be initiated, for example, by double-clicking on the tape icon labeled 311 within the report header 210 of FIG. 3A, which causes the voice recording to play back through the link in the clinical database 190. Playback also can be initiated by activating a virtual tape recorder.
- FIG. 6 shows a graphical user interface including a virtual tape recorder, according to an exemplary embodiment of the present invention. Referring to FIG. 6, the virtual tape recorder 600 includes stop button 610, record button 620 and play button 630.
- the virtual tape recorder 600 may be activated, for example, using a mouse, using a keyboard, using an assisted device, clicking on play button 630 on the virtual tape recorder 600 , activating a function key and/or activating a hardware device associated with a function key.
- a user may begin dictation of a part of a report when the user begins recording after clicking on a report part.
- information about the location and/or other attributes of the voice recording is stored in the clinical database 190, and a placeholder such as the tape icon having a text label “322” shown in FIG. 3B is inserted within the report part 220.
- the tape icon labeled 322 contains a link, shown by an arrow in FIG. 3B, to the dictation entity (box labeled “Dictation 322”) in the clinical database 190.
- a user may begin dictation of an entire section of a report when the user begins recording after clicking on a report section.
- information about the location and/or other attributes of the voice recording is stored in the clinical database 190, and a placeholder such as the tape icon having a text label “331” shown in FIG. 3C is inserted within the report section 230.
- the tape icon labeled 331 contains a link, shown by an arrow in FIG. 3C, to the dictation entity (box labeled “Dictation 331”) in the clinical database 190.
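The recurring placeholder-and-link pattern of FIGS. 3A to 3C can be sketched as follows. All names here (`clinical_database`, `record_dictation`, `play_back`) and the dictionary layout are illustrative assumptions; the patent only specifies that the placeholder links to a dictation entity in the clinical database.

```python
clinical_database = {}  # stands in for clinical database 190

def record_dictation(container, dictation_id, audio_ref, scope):
    """Store the recording's attributes and insert a linked tape-icon placeholder
    into the report element (header, part, or section) that was clicked."""
    clinical_database[dictation_id] = {
        "audio_ref": audio_ref,   # location of the voice recording file
        "scope": scope,           # "header", "part", or "section"
    }
    placeholder = {"icon": "tape", "label": str(dictation_id),
                   "link": dictation_id}  # link resolves into the clinical database
    container.append(placeholder)
    return placeholder

def play_back(placeholder):
    """Follow the placeholder's link (the double-click behavior) to the recording."""
    return clinical_database[placeholder["link"]]["audio_ref"]
```

A double-click handler on the tape icon would call `play_back` and stream the returned recording, matching the playback path described for FIG. 3A.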
- a user may begin dictation of a portion of a report section when the user begins a recording after marking an insertion point at a user-selected position in the report section.
- FIG. 4 is a flowchart showing a method of dictating into a report section via an electronic network, according to an exemplary embodiment of the present invention.
- the electronic network may be the Internet.
- In a step 410, mark a first insertion point in a report section at a first user-selected position, wherein the first user-selected position is selectable to allow a user to record a dictation (comprising electronic audio signals) at any position in the report section.
- the user may only need to annotate existing information.
- the first insertion point may be marked in a blank report section or in a report section containing data.
- Report sections containing data include new report sections into which data has been entered and previously saved report sections containing data.
- data refers to text, image, voice, multi-media, video, electronic file and/or report data.
- a method of dictating into a report section via an electronic network includes generating an instance of a report including a report section, before marking a first insertion point in the step 410 .
- Marking the first insertion point in the report section at the first user-selected position may comprise positioning a cursor at the first user-selected position in the report section.
- the cursor may be positioned using a mouse, a keyboard, and/or an assisted device.
- In a step 420, record a first dictation into the report section at the first insertion point.
- recording the first dictation into the report section at the first insertion point comprises activating a virtual tape recorder 600.
- the virtual tape recorder 600 may be activated, for example, using a mouse, using a keyboard, using an assisted device, clicking on record button 620 on the virtual tape recorder 600 , activating a function key and/or activating a hardware device associated with a function key.
- the virtual tape recorder 600 may be stopped, for example, using a mouse, using a keyboard, using an assisted device, clicking on stop button 610 on the virtual tape recorder 600 , activating a function key and/or activating a hardware device associated with a function key.
- a text label, icon, tape icon, or tape icon having a text label representing the first dictation may be inserted at the first insertion point. It is to be understood that the text label, icon, tape icon, or tape icon having a text label may be inserted at the first insertion point at any time, e.g., prior to when the virtual tape recorder 600 is stopped.
- An icon or text label may include an indication of a dictation type, e.g., a standard dictation type, a permanent dictation type, an annotation type, or a transcribe-on-demand dictation type.
- a relative position of the first dictation in the report section may be maintained when data is added to the report section or when data in the report section is edited, modified or deleted.
- the relative position of the first dictation is maintained relative to neighboring data in the report section.
- a method of dictating into a report section includes marking a second insertion point in the report section at a second user-selected position, wherein the second user-selected position is selectable to allow the user to record a dictation at any position in the report section.
- a second dictation may be recorded into the report section at the second insertion point.
- the relative position of the second dictation is maintained relative to neighboring data in the report section.
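The relative-position guarantee described above falls out naturally if section data is an ordered list in which a dictation placeholder is itself a node: edits to neighboring nodes shift absolute indices but not relative order. The sketch below is illustrative; the patent does not specify a data structure, and the sample text is invented.

```python
# section data as an ordered list; the placeholder is the tuple node
section_data = ["Resting pulse 72 bpm.", ("DICTATION", 341), "EKG normal."]

# editing a neighbor in place does not move the placeholder
section_data[0] = "Resting pulse 68 bpm."
# prepending new data shifts the absolute index but not the relative order
section_data.insert(0, "Cardiac exam:")

i = section_data.index(("DICTATION", 341))
assert section_data[i - 1] == "Resting pulse 68 bpm."  # same left neighbor
assert section_data[i + 1] == "EKG normal."            # same right neighbor
```

The same property holds for a second dictation added at another insertion point, since each placeholder travels with its neighbors.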
- a secure transcription capability comprises the steps of allowing a user to securely access the first dictation and preventing the user from accessing other data in the report section; and inserting a transcription of the first dictation at the relative position of the first dictation.
- the transcription of the first dictation may be visually distinct from other data in the report section.
- the transcribed text may be a different color than that of neighboring text. Visual cues allow the user to quickly and easily identify changes to a report. This can be useful, for example, in situations where a person such as a physician is required to review and/or approve changes in a report.
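The secure transcription capability (expose only the dictation, hide other section data, then place visually distinct transcribed text at the dictation's relative position) can be sketched as follows. Function names and the style flag are assumptions; the patent describes the behavior, not an API.

```python
def transcription_view(section_data, dictation_id):
    """Show the transcriptionist only the target dictation; withhold other data."""
    return [node if node == ("DICTATION", dictation_id) else "[WITHHELD]"
            for node in section_data]

def insert_transcription(section_data, dictation_id, text):
    """Replace the placeholder with transcribed text at its relative position,
    flagged so a renderer can display it visually distinct (e.g., colored)."""
    i = section_data.index(("DICTATION", dictation_id))
    section_data[i] = {"text": text, "style": "transcribed"}
    return section_data

data = ["Resting pulse 72 bpm.", ("DICTATION", 341), "EKG normal."]
assert transcription_view(data, 341) == ["[WITHHELD]", ("DICTATION", 341), "[WITHHELD]"]
insert_transcription(data, 341, "Mild systolic murmur noted.")
```

The `style` flag is what lets the reviewing physician spot the newly transcribed text at a glance, as the passage describes.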
- FIG. 5 is a flowchart showing a method of dictating into a report in a flexible manner, according to an exemplary embodiment of the present invention.
- In a step 510, generate an instance of a report, wherein the report includes at least one of a report header, report part, report section or section data.
- In a step 520, set an indicator to indicate a mode of dictation.
- the step 520 may comprise setting an indicator to indicate either a dictation of a report, a dictation of a report section, or a dictation of a portion of a report section.
- setting an indicator to indicate the mode of dictation comprises: setting an indicator to indicate dictation of a report, when a user begins a recording in a blank report after clicking on a report header; setting the indicator to indicate dictation of an entire section, when a user begins a recording after clicking on a report section header; and setting the indicator to indicate dictation of a portion of a report section, when a user begins a recording after marking an insertion point at a user-selected position in the report section.
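The mode-setting logic above maps the user's action to an indicator, and can be sketched as a small dispatch function. The mode names and parameters are illustrative assumptions.

```python
def dictation_mode(click_target, report_is_blank=False, insertion_marked=False):
    """Map the user's click target to a mode-of-dictation indicator."""
    if click_target == "report_header" and report_is_blank:
        return "ENTIRE_REPORT"     # recording began in a blank report via the header
    if click_target == "section_header":
        return "ENTIRE_SECTION"    # recording began after clicking a section header
    if click_target == "section_body" and insertion_marked:
        return "SECTION_PORTION"   # recording began at a marked insertion point
    return None                    # no recognized dictation context

assert dictation_mode("report_header", report_is_blank=True) == "ENTIRE_REPORT"
assert dictation_mode("section_header") == "ENTIRE_SECTION"
assert dictation_mode("section_body", insertion_marked=True) == "SECTION_PORTION"
```

The returned indicator would be stored alongside the recording's metadata so later processing knows whether the dictation covers a report, a section, or a portion of a section.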
- some voice files that can be recorded are annotations, defined as voice files that will not be transcribed but will remain as voice files. For example, these can be specified by selecting the annotations indicator for the virtual tape recorder 600 when a dictation is open, and this information will be maintained in the metadata for the voice file.
- Marking the insertion point at the user-selected position in the report section may comprise positioning a cursor at the user-selected position in the report section.
- the cursor may be positioned using a mouse, a keyboard, and/or an assisted device.
- recording the dictation into the report comprises activating a virtual tape recorder 600 to record a dictation into the report.
- the virtual tape recorder 600 may be activated, for example, using a mouse, using a keyboard, using an assisted device, clicking on record button 620 on the virtual tape recorder 600 , activating a function key and/or activating a hardware device associated with a function key.
- a relative position of the dictation in the report is maintained when data is added to the report or when data in the report is edited, modified or deleted.
- the relative position of the dictation may be maintained relative to neighboring data in the report.
- FIG. 1 illustrates a computer network suitable for use in accordance with an exemplary embodiment of the present invention.
- the application service 170 supports the ability for a user to dictate an entire report, a part thereof, a section thereof, or portion of a section thereof, via an electronic network.
- the electronic network may be the Internet.
- a system for allowing a user to selectively dictate into a report via an electronic network includes an instructable data processor for generating an instance of a report.
- the report may include a report header 210, at least one report part 220 and at least one report section 230, as shown in FIG. 2.
- the report may further include report section data 240 .
- a system for allowing a user to selectively dictate into a report via an electronic network further includes a display device operatively associated with the instructable data processor for displaying the instance of the report; at least one of a mouse, a keyboard, or an assisted device operatively associated with the display device for marking an insertion point in a report section at a user-selected position; and a microphone operatively associated with the instructable data processor to record a dictation (comprising electronic audio signals) into the report section at the insertion point.
- an exemplary scenario wherein a user dictates into a medical report section, in accordance with an exemplary embodiment of the present invention, will be described with reference to FIGS. 1 and 3D.
- the user marks an insertion point using, e.g., a mouse (clicking into the text sets an insertion point) and then indicates that s/he wishes to record by clicking on the record button 620 of the virtual tape recorder 600 .
- This action causes the application service 170 to record the user's voice and save it as a voice recording file 301 on a secure file system.
- Information about the location and/or other attributes of the voice recording is stored in the clinical database 190, and a placeholder such as a tape icon having a text label “341” is inserted within the medical report section data 240. This is also the point at which transcribed text would be inserted during the transcription process.
- the placeholder or tape icon labeled 341 contains a link, shown by an arrow in FIG. 3D, to the dictation entity, i.e., the box labeled “Dictation 341”, in the clinical database 190.
- Playback can be initiated in a variety of ways, such as, for example, by double-clicking on the tape icon labeled 341 within the medical report section data 240 of FIG. 3D, which causes the voice recording to play back through the link in the clinical database 190.
- Playback can also be initiated by activating the virtual tape recorder 600 .
- the virtual tape recorder 600 may be activated, for example, using a mouse, using a keyboard, using an assisted device, clicking on play button 630 on the virtual tape recorder 600 , activating a function key and/or activating a hardware device associated with a function key.
- a system for allowing a user to selectively dictate into a report via an electronic network allows the user to dictate only the amount of data necessary to complete the documentation of e.g., a section of a medical report, and to place the voice recording at exactly the location in the section where it is appropriate.
- a system for allowing a user to selectively dictate into a report via an electronic network eliminates the necessity to record an entire section (that may already have some relevant data) or an entire report, and permits the clinical documentation process to be performed by multiple individuals if needed.
Abstract
A computer-based method is provided for dictating into a report section via an electronic network. The method includes: marking a first insertion point in a report section at a first user-selected position, the first user-selected position being selectable to allow a user to record a dictation at any position in the report section, the dictation comprising electronic audio signals; and recording a first dictation into the report section at the first insertion point, the first dictation comprising electronic audio signals.
Description
- This is a Continuation-In-Part Application of U.S. application Ser. No. 11/083,865 (Attorney Docket No. 8123-1), filed Mar. 18, 2005 and entitled “SYSTEM AND METHOD FOR REMOTELY INPUTTING AND RETRIEVING RECORDS AND GENERATING REPORTS,” the content of which is herein incorporated by reference in its entirety.
- 1. Technical Field
- The present disclosure relates to methods and systems for dictation into a report and, more particularly, to methods and systems for dictation into user-selected portions of sections of a report.
- 2. Discussion of Related Art
- It has been the practice of professionals, such as doctors, lawyers, and engineers to personally record pertinent information on a subject patient, client or matter so that professional services performed and data pertinent to the subject are documented. The documented information can be in many different forms, such as a patient record database with patient demographic information and clinical data, an engineer's report on the structural conditions of a building, or an invoice including professional fees, travel and other expenses related to the services performed.
- In many instances, the professional memorializes the pertinent data or basis for a decision contemporaneously as services are performed, such as by handwritten notes or dictation into a voice recorder, and the information is subsequently gathered for office personnel to enter into a report. Many reports are standardized as forms and the gathered information is filled into the form for efficient reporting. For example, in the case of a physician examining a patient, clinical information is developed during discussions with and physical examination of the patient. The physician dictates or writes the clinical information observed during the examination, and the forms and notes are typically entered by the physician's office personnel. Likewise, the structural engineer dictates or writes his observations during a visual inspection, and a building inspection report is generated by filling in a form-like report with standard pre-filled text on general building condition, supplemented by contemporaneous information using the recorded dictation.
- When a patient is examined by a physician, the results of the physical examination or clinical information are routinely recorded such as by the physician entering the information onto a form which is then placed in the patient's history file. It is common practice for the healthcare professional to make handwritten notes during the patient's physical examination. The notes are later used by the healthcare professional for personally dictating a patient's report. The dictation is then transcribed, reviewed and signed by the physician who conducted the patient's physical examination.
- In the case of medical offices operating under health maintenance organization (HMO) oversight, which requires audits of medical professionals' examination notes for consistency and trends in diagnosis and treatment, the auditing process is complicated by the lack of computerized databases for monitoring and updating clinical examination data and by the time-consuming process of re-transcribing and editing paper charts.
- The Standards for Privacy of Individually Identifiable Health Information (“Privacy Rule”) limits the circumstances in which an individual's protected health information may be used or disclosed. The Privacy Rule, which was published in final form on Aug. 14, 2002, establishes, for the first time, a set of national standards for the protection of certain health information. The U.S. Department of Health and Human Services issued the Privacy Rule to implement the requirement of the Health Insurance Portability and Accountability Act of 1996 (HIPAA).
- A major purpose of the Privacy Rule is to define and limit the circumstances in which an individual's protected health information may be used or disclosed. The Privacy Rule requires that health plans, healthcare clearinghouses, and every healthcare provider, regardless of size, who electronically transmits health information shall maintain reasonable and appropriate administrative, technical and physical safeguards to ensure the integrity and confidentiality of the information, protect against any reasonably anticipated threats or hazards to the security or integrity of the information, and protect against unauthorized uses or disclosures of the information.
- Medical record documentation systems that allow the use of dictation as a means of documenting clinical data are known. Prior-art systems require that an entire report be dictated (i.e., one dictation constitutes all the sections in the report) or that an entire section of a report be dictated (i.e., one dictation constitutes the entire section). Situations arise where a section of a medical report contains a nearly complete record of the clinical findings. In existing systems, it can be necessary to repeat information that has been previously documented. This is time-consuming and gives rise to an undesirable opportunity to introduce inaccuracies in the repeated information. Dictation of previously documented information also increases costs when the dictation requires transcription.
- Despite the advances in the art, there is a need for a system and method for facilitating interactive dictation of data and records and to automatically generate reports. A need exists for a system and method of managing medical records for concurrently recording patient history and/or examination notes during patient examination. A need exists for a system and methods of managing records in which the user may document the information by voice dictation over a global electronic network such that the electronic records are private and secure. A need also exists for a system and method of allowing a user to dictate only the additional data needed in updating a record or report.
- According to an exemplary embodiment of the present invention, a computer-based method is provided for dictating into a report section via an electronic network. The method includes: marking a first insertion point in a report section at a first user-selected position, the first user-selected position being selectable to allow a user to record a dictation at any position in the report section, the dictation comprising electronic audio signals; and recording a first dictation into the report section at the first insertion point, the first dictation comprising electronic audio signals.
- According to an exemplary embodiment of the present invention, a computer-based method is provided for dictating into a report in a flexible manner. The method includes generating an instance of a report, wherein the report includes at least one of a report header, report part, report section or section data; setting an indicator to indicate a mode of dictation; and recording a dictation into the report, the dictation comprising electronic audio signals.
- According to an exemplary embodiment of the present invention, a system is provided for allowing a user to selectively dictate into a report. The system comprises: an instructable data processor for generating an instance of a report, the report comprising a report header, at least one report part, and at least one report section; a display device operatively associated with the instructable data processor for displaying the instance of the report; at least one of a mouse, a keyboard, or an assisted device operatively associated with the display device for marking an insertion point in a report section at a user-selected position; and a microphone operatively associated with the instructable data processor to record a dictation into the report section at the insertion point, wherein the dictation comprises electronic audio signals.
- The present invention will become more apparent to those of ordinary skill in the art when descriptions of exemplary embodiments thereof are read with reference to the accompanying drawings.
- FIG. 1 illustrates a computer network suitable for use in accordance with an exemplary embodiment of the present invention.
- FIG. 2 illustrates a report structure showing a medical report header, part, section and section data, according to an exemplary embodiment of the present invention.
- FIGS. 3A to 3D illustrate a system for allowing a user to selectively dictate into the report structure of FIG. 2, according to an exemplary embodiment of the present invention.
- FIG. 4 is a flowchart showing a method of dictating into a report section, according to an exemplary embodiment of the present invention.
- FIG. 5 is a flowchart showing a method of dictating into a report in a flexible manner, according to an exemplary embodiment of the present invention.
- FIG. 6 shows a graphical user interface including a virtual tape recorder, according to an exemplary embodiment of the present invention.
- Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings. Like reference numerals refer to similar or identical elements throughout the description of the figures.
- FIG. 1 illustrates a computer network suitable for use in accordance with an exemplary embodiment of the present invention. It should be understood that the elements shown in FIG. 1 may be implemented in various forms of hardware, software or combinations thereof.
- Referring to FIG. 1, a computer network 100 includes clinical database 190, general database 180, application service 170, secure Internet server 160, and at least one Internet access device, such as, for example, workstation 115, PDA 120, laptop (or notebook) computer 125, or other Microsoft Windows-enabled mobile devices 130. The computer network 100 also includes a software subsystem called the "universal address book", which maintains information on all entities in the system. In an exemplary embodiment of the present invention, universal address book (UNAB) data 185 is contained in the general database 180.
- The information, such as person-identifying and demographic information, stored in the general database 180 may be separated from the data that is stored in the clinical database 190. The general database 180 is designed to be run either integrated with the other database tables or hosted as a separate database system at a geographically different site from the clinical database 190. When the databases are hosted separately, a hacker who succeeds at breaking into one site will not obtain the other's information. For example, if a hacker were to break into the clinical database 190, the hacker would not have the person-identifying information. To increase security and privacy, the demographic data and/or clinical data may be encrypted. The connection between the UNAB data 185 and the clinical database 190 may also be encrypted.
- The secure Internet server 160 includes modules to facilitate access to/from the application service 170 for the various Internet access devices 115, 120, 125 and 130 connected to the network over the Internet 150. The workstations 115, for example, may be fixed or portable personal computers equipped with a computer monitor or screen, a keyboard, a microphone and/or a camera, software modules for browsing hypertext or hypermedia pages, a set of computer speakers and a computer mouse.
- Generally, data or information can be input into the secure Internet server 160 from the various Internet access devices 115, 120, 125 and 130 over the Internet 150 without software specially made for the secure Internet server 160. Specific software that may be needed from time to time can be downloaded from the secure Internet server 160 and installed at the various Internet access devices 115, 120, 125 and 130. For example, security software for user identification or authentication can be loaded at the user's station and used to ensure the user is a registered subscriber. Commercially available software such as VoiceID or Pronexus VBVoice can be used for the speaker identification process.
- The secure Internet server 160 may include an instructable data processor which can be coupled to a hard disk, a keyboard, a mouse, and/or another form of user interface (e.g., a microphone), as well as to a video card and display device, a network interface card, telephony cards and circuits, and random access memory (RAM), where the latter, alone or in combination with the hard disk, may contain system software which provides instruction signals for instructing the data processor and/or other instructable data processors to carry out machine-implemented operations in accordance with the present disclosure.
- As used herein, the term "dictation" refers to electronic audio signals, such as, for example, electronic audio signals representing user input speech. In accordance with an exemplary embodiment of the present invention, dictation is classified into four types, which determine the processing to be done to the voice recording. The first type is a "standard dictation", which is a voice recording intended to be transcribed. When the transcription is completed, the voice recording does not need to be saved and after an allotted time would be archived. The second type is called a "permanent dictation", which is a dictation that is intended to be transcribed, but after transcription the original voice recording needs to remain; it does not get archived. The third type is an "annotation". An annotation is a dictation that is not intended to be transcribed, and it will be saved as a voice recording. The fourth type is a "transcribe-on-demand dictation". A transcribe-on-demand dictation is an annotation in the sense that it is not intended to be transcribed, but subsequent events, such as an allegation of medical malpractice, may require that voice recordings be transcribed for the purposes of that event. Based on a user indication, those voice recordings that were marked as transcribe-on-demand can be transcribed. The user indication may include parameters such as the name of the patient, date(s), etc. For example, the user may indicate to transcribe all recordings for Mary Smith.
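The four dictation types above determine a retention and transcription policy for each voice recording. As a minimal sketch, the policy could be modeled as an enumeration plus two predicates; the class and function names here are illustrative assumptions, not part of the disclosed system.

```python
from enum import Enum, auto

class DictationType(Enum):
    STANDARD = auto()              # transcribe, then archive the recording after an allotted time
    PERMANENT = auto()             # transcribe, but keep the original voice recording
    ANNOTATION = auto()            # never transcribe; saved as a voice recording
    TRANSCRIBE_ON_DEMAND = auto()  # kept as voice; transcribed only if a later event requires it

def needs_transcription(dtype: DictationType, on_demand_requested: bool = False) -> bool:
    """True if the recording should be routed to transcription."""
    if dtype in (DictationType.STANDARD, DictationType.PERMANENT):
        return True
    if dtype is DictationType.TRANSCRIBE_ON_DEMAND:
        # Transcribed only on an explicit user indication (e.g., a malpractice allegation).
        return on_demand_requested
    return False

def keep_recording_after_transcription(dtype: DictationType) -> bool:
    """True if the voice file must remain after transcription; only a
    standard dictation is eligible for archiving."""
    return dtype is not DictationType.STANDARD
```

A user indication such as "transcribe all recordings for Mary Smith" would then flip `on_demand_requested` for the matching transcribe-on-demand recordings.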
- FIG. 2 illustrates a report structure including a medical report header, part, section and section data, according to an exemplary embodiment of the present invention. It will be appreciated that various types of reports are suitable for use in accordance with embodiments of the present invention, including but not limited to medical reports, client reports, engineering reports, research reports, tax reports, accounting reports, accident reports, inventory reports, business reports, insurance reports, financial reports, government reports, documentation reports, etc. Referring to FIG. 2, the report structure 200 illustrates an exemplary medical report that includes report header 210, report part 220, report section 230 and report section data 240.
- A report, in accordance with an exemplary embodiment of the present invention, includes a report header 210, at least one report part 220, and at least one report section 230. The report header 210 may contain metadata relating to the entire report, such as, for example, person, patient, client, author, entity such as a government entity, physician, engineer, attorney, date, file identifier such as project name, attorney docket number or security classification level, location, and so forth.
- In the case of a medical report, for example, there may be a report part 220 entitled "physical exam". The physical exam part may contain sections for the chest exam, cardiac exam, abdominal exam, etc. A report section 230 may contain report section data 240. For example, in a medical report, each report section 230 contains clinical data that documents the information relevant to that section. For example, the data for the cardiac exam section may contain information about an EKG, resting pulse, etc. The section data can include data that was entered by various means, such as text entry and dictation(s).
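The header/part/section hierarchy described above can be sketched as nested data structures. The dataclass names and the sample field values below are illustrative assumptions, not the actual schema of the disclosed system.

```python
from dataclasses import dataclass, field

@dataclass
class ReportSection:
    title: str                                 # e.g., "Cardiac Exam"
    data: list = field(default_factory=list)   # text entries, dictation placeholders, etc.

@dataclass
class ReportPart:
    title: str                                 # e.g., "Physical Exam"
    sections: list = field(default_factory=list)

@dataclass
class Report:
    header: dict                               # metadata: patient, author, date, file identifier, ...
    parts: list = field(default_factory=list)

# Building the medical-report example from the text (values are hypothetical):
exam = ReportPart("Physical Exam", sections=[
    ReportSection("Chest Exam"),
    ReportSection("Cardiac Exam", data=["EKG: normal sinus rhythm", "Resting pulse: 62"]),
    ReportSection("Abdominal Exam"),
])
report = Report(header={"patient": "Mary Smith", "author": "Dr. Jones"}, parts=[exam])
```

Because section data is an ordered list, it can mix text entered by typing with placeholders for dictations, which matters for the insertion-point behavior described below.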
- FIGS. 3A to 3D illustrate a system for allowing a user to selectively dictate into the report structure of FIG. 2, according to an exemplary embodiment of the present invention. For example, a user may begin dictation of an entire report when the user begins recording in a blank report after clicking on a report header. Referring to FIG. 3A, information about the location and/or other attributes of the voice recording is stored in the clinical database 190, and a placeholder, such as the tape icon having a text label "311" shown in FIG. 3A, is inserted within the report header 210. The tape icon labeled 311 contains a link, shown by an arrow in FIG. 3A, to the dictation entity (box labeled "Dictation 311") in the clinical database 190.
- Playback can be initiated, for example, by double-clicking on the tape icon labeled 311 within the report header 210 of FIG. 3A, which causes the voice recording to play back through the link in the clinical database 190. Playback also can be initiated by activating a virtual tape recorder. FIG. 6 shows a graphical user interface including a virtual tape recorder, according to an exemplary embodiment of the present invention. Referring to FIG. 6, the virtual tape recorder 600 includes stop button 610, record button 620 and play button 630. The virtual tape recorder 600 may be activated, for example, using a mouse, using a keyboard, using an assisted device, clicking on play button 630 on the virtual tape recorder 600, activating a function key and/or activating a hardware device associated with a function key.
- In an exemplary embodiment of the present invention, a user may begin dictation of a part of a report when the user begins recording after clicking on a report part. Referring to FIG. 3B, information about the location and/or other attributes of the voice recording is stored in the clinical database 190, and a placeholder, such as the tape icon having a text label "322" shown in FIG. 3B, is inserted within the report part 220. The tape icon labeled 322 contains a link, shown by an arrow in FIG. 3B, to the dictation entity (box labeled "Dictation 322") in the clinical database 190.
- In an exemplary embodiment of the present invention, a user may begin dictation of an entire section of a report when the user begins recording after clicking on a report section. Referring to FIG. 3C, information about the location and/or other attributes of the voice recording is stored in the clinical database 190, and a placeholder, such as the tape icon having a text label "331" shown in FIG. 3C, is inserted within the report section 230. The tape icon labeled 331 contains a link, shown by an arrow in FIG. 3C, to the dictation entity (box labeled "Dictation 331") in the clinical database 190. Alternatively, a user may begin dictation of a portion of a report section when the user begins a recording after marking an insertion point at a user-selected position in the report section.
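One way to read FIGS. 3A to 3D is that each recording is stored once in the clinical database, while the report itself holds only a placeholder (the tape icon) that links back to the dictation entity. The sketch below illustrates that idea, assuming an in-memory dictionary stands in for the clinical database 190; all names are hypothetical.

```python
# Simulated clinical database 190: dictation id -> recording attributes.
clinical_db = {}

def record_dictation(container: list, position: int, dictation_id: str, audio_ref: str) -> None:
    """Store the recording's location/attributes in the database and insert a
    linked placeholder (the 'tape icon') into the report container."""
    clinical_db[dictation_id] = {"audio_file": audio_ref, "position": position}
    placeholder = {"type": "tape_icon", "label": dictation_id}  # link into clinical_db
    container.insert(position, placeholder)

def play_back(placeholder: dict) -> str:
    """Follow the placeholder's link into the database (the double-click behavior)."""
    return clinical_db[placeholder["label"]]["audio_file"]

section_data = ["Resting pulse: 62"]
record_dictation(section_data, 0, "311", "recordings/311.wav")
```

Here `play_back(section_data[0])` resolves the tape icon to its stored recording, mirroring how double-clicking the icon plays the voice file through the link in the clinical database.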
- FIG. 4 is a flowchart showing a method of dictating into a report section via an electronic network, according to an exemplary embodiment of the present invention. The electronic network may be the Internet. Referring to FIG. 4, in a step 410, mark a first insertion point in a report section at a first user-selected position, wherein the first user-selected position is selectable to allow a user to record a dictation (comprising electronic audio signals) at any position in the report section. For example, the user may only need to annotate existing information.
- The first insertion point may be marked in a blank report section or in a report section containing data. Report sections containing data include new report sections into which data has been entered and previously saved report sections containing data. As used herein, "data" refers to text, image, voice, multi-media, video, electronic file and/or report data. In an exemplary embodiment of the present invention, a method of dictating into a report section via an electronic network includes generating an instance of a report including a report section, before marking a first insertion point in the step 410.
- Marking the first insertion point in the report section at the first user-selected position may comprise positioning a cursor at the first user-selected position in the report section. For example, the cursor may be positioned using a mouse, a keyboard, and/or an assisted device.
- In a step 420, record a first dictation into the report section at the first insertion point. In an exemplary embodiment of the present invention, recording the first dictation into the report section at the first insertion point comprises activating a virtual tape recorder 600. The virtual tape recorder 600 may be activated, for example, using a mouse, using a keyboard, using an assisted device, clicking on record button 620 on the virtual tape recorder 600, activating a function key and/or activating a hardware device associated with a function key.
- The virtual tape recorder 600 may be stopped, for example, using a mouse, using a keyboard, using an assisted device, clicking on stop button 610 on the virtual tape recorder 600, activating a function key and/or activating a hardware device associated with a function key. After the virtual tape recorder 600 is stopped, a text label, icon, tape icon, or tape icon having a text label representing the first dictation may be inserted at the first insertion point. It is to be understood that the text label, icon, tape icon, or tape icon having a text label may be inserted at the first insertion point at any time, e.g., prior to when the virtual tape recorder 600 is stopped. An icon or text label may include an indication of a dictation type, e.g., a standard dictation type, a permanent dictation type, an annotation type, or a transcribe-on-demand dictation type.
- A relative position of the first dictation in the report section may be maintained when data is added to the report section or when data in the report section is edited, modified or deleted. In an exemplary embodiment of the present invention, the relative position of the first dictation is maintained relative to neighboring data in the report section.
- An individual may choose to dictate one voice recording that contains all the data needed for a report section, or dictate one or more voice recordings that comprise portions of the data needed to complete that section. In accordance with an exemplary embodiment of the present invention, a method of dictating into a report section includes marking a second insertion point in the report section at a second user-selected position, wherein the second user-selected position is selectable to allow the user to record a dictation at any position in the report section. A second dictation may be recorded into the report section at the second insertion point. In an exemplary embodiment of the present invention, the relative position of the second dictation is maintained relative to neighboring data in the report section.
- In an exemplary embodiment of the present invention, a secure transcription capability comprises the steps of allowing a user to securely access the first dictation and preventing the user from accessing other data in the report section; and inserting a transcription of the first dictation at the relative position of the first dictation. The transcription of the first dictation may be visually distinct from other data in the report section. For example, the transcribed text may be a different color than that of neighboring text. Visual cues allow the user to quickly and easily identify changes to a report. This can be useful, for example, in situations where a person such as a physician is required to review and/or approve changes in a report.
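If the dictation's placeholder is kept inline as an element of the section's ordered data, its relative position automatically survives edits to neighboring data, and the transcription can later be inserted at exactly that spot. A sketch of that idea, assuming section data is a list mixing text strings with placeholder dictionaries (all names are illustrative):

```python
def insert_transcription(section_data: list, dictation_id: str, text: str) -> None:
    """Replace the dictation placeholder with its transcription, at the
    placeholder's current relative position in the section."""
    for i, item in enumerate(section_data):
        if isinstance(item, dict) and item.get("label") == dictation_id:
            # Flag the transcribed text so it can be rendered visually distinct
            # (e.g., a different color) for review/approval by the physician.
            section_data[i] = {"type": "transcribed_text", "text": text, "new": True}
            return
    raise KeyError(f"no placeholder for dictation {dictation_id}")

section = ["Resting pulse: 62", {"type": "tape_icon", "label": "341"}, "EKG: normal"]
section.insert(0, "BP: 120/80")   # editing neighboring data does not move the placeholder
insert_transcription(section, "341", "Heart sounds regular, no murmur.")
```

After these operations the transcription still sits between the pulse entry and the EKG entry, which is the relative-position behavior the method requires.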
- FIG. 5 is a flowchart showing a method of dictating into a report in a flexible manner, according to an exemplary embodiment of the present invention. Referring to FIG. 5, in a step 510, generate an instance of a report, wherein the report includes at least one of a report header, report part, report section or section data.
- In a step 520, set an indicator to indicate a mode of dictation. The step 520 may comprise setting an indicator to indicate either a dictation of a report, a dictation of a report section, or a dictation of a portion of a report section. In an exemplary embodiment of the present invention, setting an indicator to indicate the mode of dictation comprises: setting an indicator to indicate dictation of a report, when a user begins a recording in a blank report after clicking on a report header; setting the indicator to indicate dictation of an entire section, when a user begins a recording after clicking on a report section header; and setting the indicator to indicate dictation of a portion of a report section, when a user begins a recording after marking an insertion point at a user-selected position in the report section.
- In the case of a medical report, for example, if a patient is not selected when a dictation is begun, it is assumed that the user intends to record both patient demographic and clinical information. Some voice files that can be recorded are really annotations, defined as voice files that will not be transcribed but will remain as voice files. For example, these can be specified by selecting the annotations indicator for the virtual tape recorder 600 when a dictation is open, and this information will be maintained in the metadata for the voice file.
- Marking the insertion point at the user-selected position in the report section may comprise positioning a cursor at the user-selected position in the report section. For example, the cursor may be positioned using a mouse, a keyboard, and/or an assisted device.
- In a step 530, record a dictation into the report. In an exemplary embodiment of the present invention, recording the dictation into the report comprises activating a virtual tape recorder 600 to record a dictation into the report. The virtual tape recorder 600 may be activated, for example, using a mouse, using a keyboard, using an assisted device, clicking on record button 620 on the virtual tape recorder 600, activating a function key and/or activating a hardware device associated with a function key.
- In an exemplary embodiment of the present invention, a relative position of the dictation in the report is maintained when data is added to the report or when data in the report is edited, modified or deleted. For example, the relative position of the dictation may be maintained relative to neighboring data in the report.
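The indicator logic of the step 520 can be read as a function of where the user last clicked before recording. The sketch below illustrates that mapping; the click-target strings and mode names are assumptions for illustration, not terms from the disclosure.

```python
def dictation_mode(click_target: str, insertion_point_marked: bool = False) -> str:
    """Choose the mode-of-dictation indicator from the user's last action
    (a sketch of the step-520 decision in FIG. 5)."""
    if click_target == "report_header":
        # Recording begun in a blank report after clicking the report header.
        return "dictation_of_report"
    if click_target == "section_header":
        # Recording begun after clicking on a report section header.
        return "dictation_of_entire_section"
    if click_target == "section_body" and insertion_point_marked:
        # Recording begun after marking an insertion point in the section.
        return "dictation_of_portion_of_section"
    raise ValueError("no dictation context selected")
```

The returned mode would then be stored with the recording's metadata so that transcription and playback know which scope the dictation covers.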
- FIG. 1 illustrates a computer network suitable for use in accordance with an exemplary embodiment of the present invention. The application service 170 supports the ability for a user to dictate an entire report, a part thereof, a section thereof, or a portion of a section thereof, via an electronic network. The electronic network may be the Internet. In an exemplary embodiment of the present invention, a system for allowing a user to selectively dictate into a report via an electronic network includes an instructable data processor for generating an instance of a report. For example, the report may include a report header 210, at least one report part 220 and at least one report section 230, as shown in FIG. 2. The report may further include report section data 240.
- In an exemplary embodiment of the present invention, a system for allowing a user to selectively dictate into a report via an electronic network further includes a display device operatively associated with the instructable data processor for displaying the instance of the report; at least one of a mouse, a keyboard, or an assisted device operatively associated with the display device for marking an insertion point in a report section at a user-selected position; and a microphone operatively associated with the instructable data processor to record a dictation (comprising electronic audio signals) into the report section at the insertion point.
- Hereafter, an exemplary scenario wherein a user dictates into a medical report section, in accordance with an exemplary embodiment of the present invention, will be described with reference to FIGS. 1 and 3D. The user marks an insertion point using, e.g., a mouse (clicking into the text sets an insertion point) and then indicates that s/he wishes to record by clicking on the record button 620 of the virtual tape recorder 600. (It is to be understood that the user may mark an insertion point using any suitable means, including but not limited to a mouse, a keyboard, or an assisted device.) This action causes the application service 170 to record the user's voice and save it as a voice recording file 301 on a secure file system. Information about the location and/or other attributes of the voice recording is stored in the clinical database 190, and a placeholder, such as the tape icon having a text label "341", is inserted within the medical report section data 240. This is also the point at which transcribed text would be inserted during the transcription process. The placeholder or tape icon labeled 341 contains a link, shown by an arrow in FIG. 3D, to the dictation entity, i.e., the box labeled "Dictation 341", in the clinical database 190.
- Playback can be initiated in a variety of ways, such as, for example, by double-clicking on the tape icon labeled 341 within the medical report section data 240 of FIG. 3D, which causes the voice recording to play back through the link in the clinical database 190. Playback can also be initiated by activating the virtual tape recorder 600. The virtual tape recorder 600 may be activated, for example, using a mouse, using a keyboard, using an assisted device, clicking on play button 630 on the virtual tape recorder 600, activating a function key and/or activating a hardware device associated with a function key.
- A system for allowing a user to selectively dictate into a report via an electronic network, according to exemplary embodiments of the present invention, allows the user to dictate only the amount of data necessary to complete the documentation of, e.g., a section of a medical report, and to place the voice recording at exactly the location in the section where it is appropriate.
- A system for allowing a user to selectively dictate into a report via an electronic network, according to exemplary embodiments of the present invention, eliminates the necessity to record an entire section (that may already have some relevant data) or an entire report, and permits the clinical documentation process to be performed by multiple individuals if needed.
- Although the exemplary embodiments of the present invention have been described in detail with reference to the accompanying drawings for the purpose of illustration, it is to be understood that the inventive processes and systems are not to be construed as limited thereby. It will be readily apparent to those of ordinary skill in the art that various modifications to the foregoing exemplary embodiments can be made therein without departing from the scope of the invention as defined by the appended claims, with equivalents of the claims to be included therein.
Claims (24)
1. A computer-based method of dictating into a report section via an electronic network, comprising:
marking a first insertion point in a report section at a first user-selected position, the first user-selected position being selectable to allow a user to record a dictation at any position in the report section, the dictation comprising electronic audio signals; and
recording a first dictation into the report section at the first insertion point, the first dictation comprising electronic audio signals.
2. The computer-based method of claim 1, wherein recording the first dictation into the report section at the first insertion point comprises activating a virtual tape recorder to record the first dictation into the report section at the first insertion point.
3. The computer-based method of claim 1, wherein the report section comprises a blank report section or a report section containing data.
4. The computer-based method of claim 1, wherein marking the first insertion point in the report section at the first user-selected position comprises positioning a cursor at the first user-selected position in the report section.
5. The computer-based method of claim 1, wherein the dictation is either a standard dictation type, a permanent dictation type, an annotation type, or a transcribe-on-demand dictation type.
6. The computer-based method of claim 1, wherein the electronic network is the Internet.
7. The computer-based method of claim 1, further comprising stopping the virtual tape recorder.
8. The computer-based method of claim 7, further comprising placing at least one of a text label, an icon, a tape icon, or a tape icon having a text label associated with the first dictation at the first insertion point.
9. The computer-based method of claim 8, wherein the text label includes an indication of a type of dictation.
10. The computer-based method of claim 1, further comprising generating an instance of a report including a report section, before marking a first insertion point in the report section.
11. The computer-based method of claim 1, further comprising marking a second insertion point in the report section at a second user-selected position, the second user-selected position being selectable to allow the user to record a dictation at any position in the report section.
12. The computer-based method of claim 11, further comprising recording a second dictation into the report section at the second insertion point.
13. The computer-based method of claim 1, further comprising maintaining a relative position of the first dictation in the report section when data is added to the report section or when data in the report section is edited, modified or deleted.
14. The computer-based method of claim 13, wherein maintaining the relative position of the first dictation comprises maintaining the relative position of the first dictation relative to neighboring data in the report section.
15. The computer-based method of claim 13, further comprising providing a secure transcription capability, the secure transcription capability comprising:
allowing a user to securely access the first dictation and preventing the user from accessing other data in the report section; and
inserting a transcription of the first dictation at the relative position of the first dictation.
16. The computer-based method of claim 15 , wherein the transcription of the first dictation is visually distinct from other data in the report section.
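Claims 13-14 require that an embedded dictation keep its position relative to neighboring data as the section is edited. A minimal sketch of that offset maintenance, assuming markers are stored as character offsets (a hypothetical representation, not taken from the patent):

```python
def shift_markers(markers, edit_pos, delta):
    """Claims 13-14 sketch: keep each dictation marker anchored relative to
    its neighboring data when text is inserted (delta > 0) or deleted
    (delta < 0) at character position edit_pos."""
    for m in markers:
        if m["offset"] >= edit_pos:
            # Clamp so a deletion never moves a marker ahead of the edit point.
            m["offset"] = max(edit_pos, m["offset"] + delta)
    return markers
```

Inserting text before a marker shifts it right; deleting text before it shifts it left, clamped at the edit position, so the marker stays with the same neighboring data.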
17. A computer-based method of dictating into a report in a flexible manner, comprising:
generating an instance of a report, the report including at least one of a report header, report part, report section or section data;
setting an indicator to indicate a mode of dictation; and
recording a dictation into the report, the dictation comprising electronic audio signals.
18. The computer-based method of claim 17, wherein recording a dictation into the report comprises activating a virtual tape recorder to record a dictation into the report.
19. The computer-based method of claim 17, wherein setting an indicator to indicate the mode of dictation comprises setting an indicator to indicate one of a dictation of a report, a dictation of a report section, or a dictation of a portion of a report section.
20. The computer-based method of claim 19, wherein setting an indicator to indicate the mode of dictation comprises:
setting an indicator to indicate dictation of a report, when a user begins a recording in a blank report after clicking on a report header;
setting the indicator to indicate dictation of an entire section, when a user begins a recording after clicking on a report section header; and
setting the indicator to indicate dictation of a portion of a report section, when a user begins a recording after marking an insertion point at a user-selected position in the report section.
21. The computer-based method of claim 20, wherein marking the insertion point at the user-selected position in the report section comprises positioning a cursor at the user-selected position in the report section.
22. The computer-based method of claim 21, further comprising maintaining a relative position of the dictation in the report when data is added to the report or when data in the report is edited, modified or deleted.
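Claim 20's three-way mode selection amounts to a dispatch on what the user clicked before recording began. A hypothetical sketch (the mode names and click-target strings are invented for illustration and do not appear in the patent):

```python
def set_dictation_mode(click_target, insertion_point=None):
    """Claim 20 sketch: choose the dictation-mode indicator from the
    user's click context before the recording starts."""
    if click_target == "report_header":
        return "REPORT"            # dictation of a (blank) report
    if click_target == "section_header":
        return "SECTION"           # dictation of an entire report section
    if click_target == "section_body" and insertion_point is not None:
        return "SECTION_PORTION"   # dictation at a marked insertion point
    raise ValueError("no dictation mode for this context")
```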
23. A system for allowing a user to selectively dictate into a report, comprising:
an instructable data processor for generating an instance of a report, the report comprising a report header, at least one report part, and at least one report section;
a display device operatively associated with the instructable data processor for displaying the instance of the report;
at least one of a mouse, a keyboard, or an assisted device operatively associated with the display device for marking an insertion point in a report section at a user-selected position; and
a microphone operatively associated with the instructable data processor to record a dictation into the report section at the insertion point, wherein the dictation comprises electronic audio signals.
24. The system of claim 23, wherein the user-selected position is selectable to allow the user to record a dictation at any position in the report section, and wherein the dictation comprises electronic audio signals.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US11/498,956 US20070033033A1 (en) | 2005-03-18 | 2006-08-03 | Dictate section data |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US11/083,865 US20060212452A1 (en) | 2005-03-18 | 2005-03-18 | System and method for remotely inputting and retrieving records and generating reports |
| US11/498,956 US20070033033A1 (en) | 2005-03-18 | 2006-08-03 | Dictate section data |
Related Parent Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US11/083,865 Continuation-In-Part US20060212452A1 (en) | 2005-03-18 | 2005-03-18 | System and method for remotely inputting and retrieving records and generating reports |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| US20070033033A1 (en) | 2007-02-08 |
Family
ID=37011601
Family Applications (4)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US11/083,865 Abandoned US20060212452A1 (en) | 2005-03-18 | 2005-03-18 | System and method for remotely inputting and retrieving records and generating reports |
| US11/498,951 Expired - Fee Related US7725479B2 (en) | 2005-03-18 | 2006-08-03 | Unique person registry |
| US11/498,955 Expired - Fee Related US7877683B2 (en) | 2005-03-18 | 2006-08-03 | Self-organizing report |
| US11/498,956 Abandoned US20070033033A1 (en) | 2005-03-18 | 2006-08-03 | Dictate section data |
Family Applications Before (3)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US11/083,865 Abandoned US20060212452A1 (en) | 2005-03-18 | 2005-03-18 | System and method for remotely inputting and retrieving records and generating reports |
| US11/498,951 Expired - Fee Related US7725479B2 (en) | 2005-03-18 | 2006-08-03 | Unique person registry |
| US11/498,955 Expired - Fee Related US7877683B2 (en) | 2005-03-18 | 2006-08-03 | Self-organizing report |
Country Status (2)
| Country | Link |
|---|---|
| US (4) | US20060212452A1 (en) |
| WO (1) | WO2006101770A2 (en) |
Families Citing this family (58)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20050209881A1 (en) * | 2004-03-22 | 2005-09-22 | Norton Jeffrey W | Method of tracking home-healthcare services |
| JP2005346495A (en) * | 2004-06-03 | 2005-12-15 | Oki Electric Ind Co Ltd | Information processing system, information processing method, and information processing program |
| US8046673B2 (en) * | 2005-11-07 | 2011-10-25 | Business Objects Software Ltd. | Apparatus and method for facilitating trusted business intelligence through data context |
| US7788296B2 (en) * | 2005-12-29 | 2010-08-31 | Guidewire Software, Inc. | Method and apparatus for managing a computer-based address book for incident-related work |
| US8023621B2 (en) * | 2006-01-17 | 2011-09-20 | LReady, Inc. | Dynamic family disaster plan |
| US8676703B2 (en) | 2006-04-27 | 2014-03-18 | Guidewire Software, Inc. | Insurance policy revisioning method and apparatus |
| US20070282640A1 (en) * | 2006-05-26 | 2007-12-06 | Allmed Resources Llc | Healthcare information accessibility and processing system |
| US7730078B2 (en) * | 2006-09-28 | 2010-06-01 | Honeywell Hommed Llc | Role based internet access and individualized role based systems to view biometric information |
| US8204895B2 (en) * | 2006-09-29 | 2012-06-19 | Business Objects Software Ltd. | Apparatus and method for receiving a report |
| US8126887B2 (en) * | 2006-09-29 | 2012-02-28 | Business Objects Software Ltd. | Apparatus and method for searching reports |
| US7899837B2 (en) | 2006-09-29 | 2011-03-01 | Business Objects Software Ltd. | Apparatus and method for generating queries and reports |
| US8619978B2 (en) * | 2006-12-22 | 2013-12-31 | Pagebites, Inc. | Multiple account authentication |
| JP5004970B2 (en) * | 2006-12-28 | 2012-08-22 | 富士通株式会社 | Method, information processing apparatus, and program for logging in to computer |
| US8184781B2 (en) * | 2007-01-12 | 2012-05-22 | Secureach Systems, Llc | Method and system for communicating information |
| US8930210B2 (en) | 2007-03-29 | 2015-01-06 | Nuance Communications, Inc. | Method and system for generating a medical report and computer program product therefor |
| WO2008121930A1 (en) * | 2007-03-29 | 2008-10-09 | Nesticon, Llc | Creating a report having computer generated narrative text |
| US9098263B2 (en) * | 2007-04-30 | 2015-08-04 | Microsoft Technology Licensing, Llc | Database application assembly and preparation |
| US20090216532A1 (en) * | 2007-09-26 | 2009-08-27 | Nuance Communications, Inc. | Automatic Extraction and Dissemination of Audio Impression |
| US7979793B2 (en) * | 2007-09-28 | 2011-07-12 | Microsoft Corporation | Graphical creation of a document conversion template |
| US9152656B2 (en) * | 2007-11-20 | 2015-10-06 | Microsoft Technology Licensing, Llc | Database data type creation and reuse |
| US20090248740A1 (en) * | 2007-11-20 | 2009-10-01 | Microsoft Corporation | Database form and report creation and reuse |
| US20090150451A1 (en) * | 2007-12-07 | 2009-06-11 | Roche Diagnostics Operations, Inc. | Method and system for selective merging of patient data |
| US20090210516A1 (en) * | 2008-02-15 | 2009-08-20 | Carrier Iq, Inc. | Using mobile device to create activity record |
| JP5426105B2 (en) * | 2008-03-27 | 2014-02-26 | 富士フイルム株式会社 | MEDICAL REPORT SYSTEM, MEDICAL REPORT VIEW DEVICE, MEDICAL REPORT PROGRAM, AND MEDICAL REPORT SYSTEM OPERATING METHOD |
| US8312057B2 (en) * | 2008-10-06 | 2012-11-13 | General Electric Company | Methods and system to generate data associated with a medical report using voice inputs |
| KR20100038536A (en) * | 2008-10-06 | 2010-04-15 | 주식회사 이베이지마켓 | System for utilization of client information in the electronic commerce and method thereof |
| US8874460B2 (en) * | 2009-01-19 | 2014-10-28 | Appature, Inc. | Healthcare marketing data optimization system and method |
| US8489458B2 (en) | 2009-02-24 | 2013-07-16 | Google Inc. | Rebroadcasting of advertisements in a social network |
| US20110071994A1 (en) * | 2009-09-22 | 2011-03-24 | Appsimple, Ltd | Method and system to securely store data |
| US11080790B2 (en) | 2009-09-24 | 2021-08-03 | Guidewire Software, Inc. | Method and apparatus for managing revisions and tracking of insurance policy elements |
| US8595620B2 (en) * | 2009-09-29 | 2013-11-26 | Kwatros Corporation | Document creation and management systems and methods |
| US8429547B2 (en) * | 2009-10-20 | 2013-04-23 | Universal Research Solutions, Llc | Generation and data management of a medical study using instruments in an integrated media and medical system |
| WO2011079208A1 (en) * | 2009-12-24 | 2011-06-30 | Flir Systems, Inc. | Cameras with on-board reporting capabilities |
| TWI400622B (en) * | 2010-11-08 | 2013-07-01 | Inventec Corp | Translation inquiring system for recording self-defining translation interpretation for redisplay and method thereof |
| US8745413B2 (en) * | 2011-03-02 | 2014-06-03 | Appature, Inc. | Protected health care data marketing system and method |
| CN102956125B (en) * | 2011-08-25 | 2014-10-01 | 骅钜数位科技有限公司 | Cloud digital voice teaching recording system |
| US20130191898A1 (en) * | 2012-01-04 | 2013-07-25 | Harold H. KRAFT | Identity verification credential with continuous verification and intention-based authentication systems and methods |
| CN104303204B (en) * | 2012-03-01 | 2018-10-12 | 爱克发医疗保健公司 | System and method for generating medical report |
| US9015073B2 (en) | 2012-06-06 | 2015-04-21 | Addepar, Inc. | Controlled creation of reports from table views |
| RU2536390C2 (en) * | 2012-10-31 | 2014-12-20 | Общество с ограниченной ответственностью "1С" | Automated report generation method |
| CN104064188A (en) * | 2013-03-22 | 2014-09-24 | 中兴通讯股份有限公司 | Method for realizing cloud note with voice turned into characters and device thereof |
| US9860242B2 (en) * | 2014-08-11 | 2018-01-02 | Vivint, Inc. | One-time access to an automation system |
| US9424333B1 (en) * | 2014-09-05 | 2016-08-23 | Addepar, Inc. | Systems and user interfaces for dynamic and interactive report generation and editing based on automatic traversal of complex data structures |
| US9244899B1 (en) | 2014-10-03 | 2016-01-26 | Addepar, Inc. | Systems and user interfaces for dynamic and interactive table generation and editing based on automatic traversal of complex data structures including time varying attributes |
| US10732810B1 (en) | 2015-11-06 | 2020-08-04 | Addepar, Inc. | Systems and user interfaces for dynamic and interactive table generation and editing based on automatic traversal of complex data structures including summary data such as time series data |
| US11443390B1 (en) | 2015-11-06 | 2022-09-13 | Addepar, Inc. | Systems and user interfaces for dynamic and interactive table generation and editing based on automatic traversal of complex data structures and incorporation of metadata mapped to the complex data structures |
| US10372807B1 (en) | 2015-11-11 | 2019-08-06 | Addepar, Inc. | Systems and user interfaces for dynamic and interactive table generation and editing based on automatic traversal of complex data structures in a distributed system architecture |
| US10275450B2 (en) * | 2016-02-15 | 2019-04-30 | Tata Consultancy Services Limited | Method and system for managing data quality for Spanish names and addresses in a database |
| US10629207B2 (en) * | 2017-07-13 | 2020-04-21 | Comcast Cable Communications, Llc | Caching scheme for voice recognition engines |
| WO2019109347A1 (en) * | 2017-12-08 | 2019-06-13 | 深圳迈瑞生物医疗电子股份有限公司 | Data processing method and device |
| CN108564997A (en) * | 2018-04-19 | 2018-09-21 | 北京深度智耀科技有限公司 | A kind of Clinical Report generation method and device |
| US10831872B2 (en) * | 2018-05-08 | 2020-11-10 | Covidien Lp | Automated voice-activated medical assistance |
| US11515018B2 (en) | 2018-11-08 | 2022-11-29 | Express Scripts Strategic Development, Inc. | Systems and methods for patient record matching |
| KR102153668B1 (en) * | 2019-10-29 | 2020-09-09 | 주식회사 퍼즐에이아이 | Automatic Voice Recognizer for medical treatment with keyboard macro function and Voice Recognizing Method thereof |
| US20210322265A1 (en) * | 2020-03-31 | 2021-10-21 | Zoll Circulation, Inc. | Data Management System and Methods for Chest Compression Devices |
| US11507345B1 (en) * | 2020-09-23 | 2022-11-22 | Suki AI, Inc. | Systems and methods to accept speech input and edit a note upon receipt of an indication to edit |
| CN113220293B (en) * | 2021-04-23 | 2024-04-12 | 北京城市网邻信息技术有限公司 | Page display method, page display device, electronic equipment and computer readable medium |
| US11842037B2 (en) * | 2022-02-23 | 2023-12-12 | Capital One Services, Llc | Presentation and control of user interactions with a time-dependent user interface element |
Citations (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5267155A (en) * | 1989-10-16 | 1993-11-30 | Medical Documenting Systems, Inc. | Apparatus and method for computer-assisted document generation |
| US6026363A (en) * | 1996-03-06 | 2000-02-15 | Shepard; Franziska | Medical history documentation system and method |
| US6067084A (en) * | 1997-10-29 | 2000-05-23 | International Business Machines Corporation | Configuring microphones in an audio interface |
| US20020065854A1 (en) * | 2000-11-29 | 2002-05-30 | Jennings Pressly | Automated medical diagnosis reporting system |
| US20020099552A1 (en) * | 2001-01-25 | 2002-07-25 | Darryl Rubin | Annotating electronic information with audio clips |
| US20020167687A1 (en) * | 1992-02-25 | 2002-11-14 | Irving Tsai | Method and apparatus for linking designated portions of a received document image with an electronic address |
| US20030083577A1 (en) * | 1999-01-29 | 2003-05-01 | Greenberg Jeffrey M. | Voice-enhanced diagnostic medical ultrasound system and review station |
| US20040261013A1 (en) * | 2003-06-23 | 2004-12-23 | Intel Corporation | Multi-team immersive integrated collaboration workspace |
| US20050108261A1 (en) * | 2003-11-04 | 2005-05-19 | Joseph Glassy | Geodigital multimedia data processing system and method |
| US20060041428A1 (en) * | 2004-08-20 | 2006-02-23 | Juergen Fritsch | Automated extraction of semantic content and generation of a structured document from speech |
Family Cites Families (36)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5361202A (en) * | 1993-06-18 | 1994-11-01 | Hewlett-Packard Company | Computer display system and method for facilitating access to patient data records in a medical information system |
| US5583933A (en) * | 1994-08-05 | 1996-12-10 | Mark; Andrew R. | Method and apparatus for the secure communication of data |
| US6177940B1 (en) * | 1995-09-20 | 2001-01-23 | Cedaron Medical, Inc. | Outcomes profile management system for evaluating treatment effectiveness |
| US6684188B1 (en) * | 1996-02-02 | 2004-01-27 | Geoffrey C Mitchell | Method for production of medical records and other technical documents |
| US6031625A (en) * | 1996-06-14 | 2000-02-29 | Alysis Technologies, Inc. | System for data extraction from a print data stream |
| DE69616093D1 (en) * | 1996-07-03 | 2001-11-22 | Sopheon N V | SYSTEM FOR SUPPORTING THE PRODUCTION OF DOCUMENTS |
| US5870559A (en) * | 1996-10-15 | 1999-02-09 | Mercury Interactive | Software system and associated methods for facilitating the analysis and management of web sites |
| US6393431B1 (en) * | 1997-04-04 | 2002-05-21 | Welch Allyn, Inc. | Compact imaging instrument system |
| US6573907B1 (en) * | 1997-07-03 | 2003-06-03 | Obvious Technology | Network distribution and management of interactive video and multi-media containers |
| US6016476A (en) * | 1997-08-11 | 2000-01-18 | International Business Machines Corporation | Portable information and transaction processing system and method utilizing biometric authorization and digital certificate security |
| US6490620B1 (en) * | 1997-09-26 | 2002-12-03 | Worldcom, Inc. | Integrated proxy interface for web based broadband telecommunications management |
| US6337858B1 (en) * | 1997-10-10 | 2002-01-08 | Nortel Networks Limited | Method and apparatus for originating voice calls from a data network |
| US6259352B1 (en) * | 1998-03-02 | 2001-07-10 | Leon Yulkowski | Door lock system |
| US6801916B2 (en) * | 1998-04-01 | 2004-10-05 | Cyberpulse, L.L.C. | Method and system for generation of medical reports from data in a hierarchically-organized database |
| US6311190B1 (en) * | 1999-02-02 | 2001-10-30 | Harris Interactive Inc. | System for conducting surveys in different languages over a network with survey voter registration |
| US6591272B1 (en) * | 1999-02-25 | 2003-07-08 | Tricoron Networks, Inc. | Method and apparatus to make and transmit objects from a database on a server computer to a client computer |
| US20040220830A1 (en) * | 1999-10-12 | 2004-11-04 | Advancepcs Health, L.P. | Physician information system and software with automated data capture feature |
| US20040078236A1 (en) * | 1999-10-30 | 2004-04-22 | Medtamic Holdings | Storage and access of aggregate patient data for analysis |
| WO2001059687A1 (en) * | 2000-02-09 | 2001-08-16 | Patientpower.Com, Llc | Method and system for managing patient medical records |
| US7139686B1 (en) * | 2000-03-03 | 2006-11-21 | The Mathworks, Inc. | Report generator for a mathematical computing environment |
| EP1312219A2 (en) * | 2000-08-25 | 2003-05-21 | Intellocity USA, Inc. | Method of enhancing streaming media content |
| US6681229B1 (en) * | 2000-09-07 | 2004-01-20 | International Business Machines Corporation | System and method for providing a relational database backend |
| US7096416B1 (en) * | 2000-10-30 | 2006-08-22 | Autovod | Methods and apparatuses for synchronizing mixed-media data files |
| US7734480B2 (en) * | 2000-11-13 | 2010-06-08 | Peter Stangel | Clinical care utilization management system |
| US7072725B2 (en) * | 2001-03-26 | 2006-07-04 | Medtronic, Inc. | Implantable therapeutic substance infusion device configuration system |
| US20020162116A1 (en) * | 2001-04-27 | 2002-10-31 | Sony Corporation | VoIP telephony peripheral |
| US7529685B2 (en) * | 2001-08-28 | 2009-05-05 | Md Datacor, Inc. | System, method, and apparatus for storing, retrieving, and integrating clinical, diagnostic, genomic, and therapeutic data |
| US6978268B2 (en) * | 2002-03-16 | 2005-12-20 | Siemens Medical Solutions Health Services Corporation | Healthcare organization central record and record identifier management system |
| US7716072B1 (en) * | 2002-04-19 | 2010-05-11 | Greenway Medical Technologies, Inc. | Integrated medical software system |
| US20040153337A1 (en) * | 2003-02-05 | 2004-08-05 | Cruze Guille B. | Automatic authorizations |
| US7299410B2 (en) * | 2003-07-01 | 2007-11-20 | Microsoft Corporation | System and method for reporting hierarchically arranged data in markup language formats |
| US7860727B2 (en) * | 2003-07-17 | 2010-12-28 | Ventana Medical Systems, Inc. | Laboratory instrumentation information management and control network |
| US20050209892A1 (en) * | 2004-03-19 | 2005-09-22 | Miller Jonathan K | [Automated system and method for providing accurate, non-invasive insurance status verification] |
| US7549118B2 (en) * | 2004-04-30 | 2009-06-16 | Microsoft Corporation | Methods and systems for defining documents with selectable and/or sequenceable parts |
| US7721195B2 (en) * | 2004-08-24 | 2010-05-18 | Oracle International Corporation | RTF template and XSL/FO conversion: a new way to create computer reports |
| US7617450B2 (en) * | 2004-09-30 | 2009-11-10 | Microsoft Corporation | Method, system, and computer-readable medium for creating, inserting, and reusing document parts in an electronic document |
- 2005
  - 2005-03-18 US US11/083,865 patent/US20060212452A1/en not_active Abandoned
- 2006
  - 2006-03-09 WO PCT/US2006/008545 patent/WO2006101770A2/en not_active Ceased
  - 2006-08-03 US US11/498,951 patent/US7725479B2/en not_active Expired - Fee Related
  - 2006-08-03 US US11/498,955 patent/US7877683B2/en not_active Expired - Fee Related
  - 2006-08-03 US US11/498,956 patent/US20070033033A1/en not_active Abandoned
Also Published As
| Publication number | Publication date |
|---|---|
| US20070033174A1 (en) | 2007-02-08 |
| US7725479B2 (en) | 2010-05-25 |
| US20060212452A1 (en) | 2006-09-21 |
| US20070038948A1 (en) | 2007-02-15 |
| WO2006101770A2 (en) | 2006-09-28 |
| US7877683B2 (en) | 2011-01-25 |
| WO2006101770A3 (en) | 2008-02-14 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20070033033A1 (en) | | Dictate section data |
| US7502741B2 (en) | | Audio signal de-identification |
| US20080301805A1 (en) | | Methods of communicating object data |
| US20090138287A1 (en) | | System and method for assigning, recording and monitoring MS-DRG codes in a patient treatment facility |
| US20150046189A1 (en) | | Electronic health records system |
| US20250006318A1 (en) | | Electronic data document for use in clinical trial verification system and method |
| US7464043B1 (en) | | Computerized method and system for obtaining, storing and accessing medical records |
| US20170052944A1 (en) | | Content digitization and digitized content characterization systems and methods |
| Kamal | | Implementation of electronic medical records in developing countries: challenges & barriers |
| Garba et al. | | Significance and challenges of medical records: A systematic literature review |
| US8019620B2 (en) | | System and method for medical privacy management |
| US20070033535A1 (en) | | System and method for information entry in report section |
| Funmilola et al. | | Development of an electronic medical record (EMR) system for a typical Nigerian hospital |
| C. David et al. | | Error rates in physician dictation: quality assurance and medical record production |
| Mukherjee et al. | | Virtual consent for virtual patients: benefits of implementation in a peri- and post-COVID-19 era |
| Burbridge | | Dicom image anonymization and transfer to create a diagnostic radiology teaching file |
| McAndrew et al. | | A comparison of computer- and hand-generated clinical dental notes with statutory regulations in record keeping |
| Solon et al. | | Preparing evidence for court |
| Lulembo et al. | | Improving healthcare delivery with the use of online patient information management system |
| US20190198139A1 (en) | | Systems and methods for securing electronic data that includes personally identifying information |
| Nu’man et al. | | Root Cause Analysis And Strategies To Improve Outpatient Pharmacy Services |
| Bote et al. | | Evaluation of healthcare institutions for long-term preservation of electronic health records |
| Rockel | | Stedman's guide to the HIPAA privacy rule |
| Edmund et al. | | Impact of E-Records on the Healthcare Industry |
| Bista et al. | | Medical transcription outsourcing and internet-enabling services |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | STCB | Information on status: application discontinuation | Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION |