
HK1195383A - Color vision deficit correction - Google Patents


Info

Publication number
HK1195383A
HK1195383A (application HK14108736.6A)
Authority
HK
Hong Kong
Prior art keywords
image
virtual image
see
user
display device
Prior art date
Application number
HK14108736.6A
Other languages
Chinese (zh)
Inventor
T. Ambrus
A. Smith-Kipnis
S. Latta
D. McCulloch
B. Mount
K. Geisner
I. McIntyre
Original Assignee
Microsoft Technology Licensing, Llc
Filing date
Publication date
Application filed by Microsoft Technology Licensing, Llc filed Critical Microsoft Technology Licensing, Llc
Publication of HK1195383A

Description

Color vision deficiency correction
Background
Many people have difficulty distinguishing certain colors from one another. This condition is known as "color vision deficiency" and is colloquially referred to as "color blindness". Several different forms of color vision deficiency are recognized, including red-green dichromacy (protanopia and deuteranopia), anomalous red-green trichromacy (protanomaly and deuteranomaly), blue-yellow dichromacy (tritanopia), and anomalous blue-yellow trichromacy (tritanomaly). Each form is caused by the expression of a recessive genetic trait that reduces the variety of cone cells in the retina of the affected eye, or renders some of those cones less sensitive. Because these traits are carried primarily on the X chromosome, they affect 7% to 10% of the male population but only about 0.5% of the female population. Achromatopsia (monochromacy, or total color blindness) is also recognized, as is color vision deficiency resulting from injury.
In society, color vision deficiency can amount to a degree of disability. For example, it may impair the affected person's ability to recognize traffic signals or other signs. It may disqualify the affected person from working in fields where sensitive color perception is required. Furthermore, color vision deficiency may diminish the affected person's overall perception and enjoyment of the visual world. Unfortunately, there is no medical therapy or treatment for color vision deficiency.
Disclosure of Invention
One embodiment of the present invention provides a method for improving the color resolution capability of a user of a see-through display device. The method is implemented within a see-through display device and includes constructing a virtual image to be superimposed on a real image seen by the user through the see-through display device. The virtual image is configured to emphasize loci of colors in the real image that are not well resolved by the user. The virtual image is then displayed by superimposing it on the real image, in registration with the real image, in the user's field of view.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any disadvantages noted in any part of this disclosure.
Drawings
FIG. 1A illustrates aspects of an augmented reality environment according to an embodiment of the invention.
FIG. 1B illustrates an example field of view of a user in the augmented reality environment of FIG. 1A.
Fig. 2 and 3 illustrate example see-through display devices according to various embodiments of the invention.
FIG. 4 illustrates aspects of an example optical component of a see-through display device according to an embodiment of the invention.
FIG. 5 illustrates an example method for improving color resolution capability of a user of a see-through display device according to an embodiment of this disclosure.
FIG. 6 shows how a selected real image may look to a person with green-weak color vision (deuteranomaly) before the methods described herein are applied.
FIGS. 7-13 show examples of how the real image of FIG. 6 would look to the same person after the methods described herein are applied.
FIG. 14 illustrates another example method for improving color resolution capability of a user of a see-through display device in accordance with an embodiment of the present invention.
FIG. 15 schematically shows aspects of an example computing system according to an embodiment of the invention.
Detailed Description
Aspects of the invention will now be described, by way of example, with reference to the embodiments illustrated above. Components, process steps, and other elements that may be substantially the same in one or more embodiments are identified coordinately and are described with minimal repetition. It should be noted, however, that elements identified coordinately may also differ to some extent. It is further noted that the drawings included in this disclosure are schematic and generally not drawn to scale. Rather, the various drawing scales, aspect ratios, and numbers of components shown in the figures may be purposely distorted to make particular features or relationships easier to see.
Embodiments of an Augmented Reality (AR) method for improving a person's color resolving power are described. AR enables a person to view real world images as well as computer generated virtual images. The AR system may include a see-through display device that is worn by a person and through which the real and virtual images are combined in the same field of view. Such devices may be incorporated in goggles, helmets, glasses or other items worn on the eyes.
Before discussing these embodiments in detail, aspects of an example AR environment 10 are first described with reference to FIG. 1A. To experience augmented reality, user 12 may employ an AR system with suitable display, sensing, and computing hardware. In the embodiment shown in FIG. 1A, the AR system includes a cloud 14 and a see-through display device 16. "Cloud" is a term used to describe a computer system accessible over a network and configured to provide computing services; in the present context, the cloud may include any number of computers.
See-through display device 16 is a wearable device configured to present real and virtual images to its wearer (i.e., user 12). More specifically, the see-through display device enables a user to view real-world images in conjunction with computer-generated virtual images. Images from both sources are presented in the user's field of view and may appear to share the same physical space. This scene is represented in FIG. 1B, which shows an example field of view of the user 12, including a virtual image (dragon) that can be viewed along with real-world objects. As will be described in more detail below, the see-through display device may comprise a computer. Thus, some of the computer programs that provide the AR environment may be executed in a see-through display device. Other computer programs may execute within the cloud 14, the cloud 14 being operatively coupled to the see-through display device by one or more wireless communication links. Such links may include cellular, Wi-Fi, and others.
In some scenarios, the computer program providing the AR experience may include a game. More generally, the program may be any program that combines a computer-generated virtual image and a real-world image. A realistic AR experience may be obtained by each user viewing their surroundings naturally through the transparent optical elements of the see-through display device. The virtual image is projected simultaneously in the same field of view as the received real image.
FIG. 2 illustrates an example see-through display device 16 in one embodiment. See-through display device 16 is a helmet having a controllable dimming filter 18 in the form of a visor. The dimming filter may be configured for glare reduction and/or brightness reduction of real images received through the see-through display device. Between the dimming filter and each of the wearer's eyes there is a microdisplay 20 and an eye tracker 22. Microdisplay 20A and eye tracker 22A are arranged in front of the right eye; microdisplay 20B and eye tracker 22B are arranged in front of the left eye. Although the eye trackers are arranged behind the microdisplays in the figure, they may instead be arranged in front of the microdisplays, or distributed at various locations within the see-through display device. The see-through display device 16 also includes a computer 24, which is operatively coupled to both microdisplays and both eye trackers.
Each microdisplay 20 may be at least partially transparent, providing a substantially unobstructed field of view through which the user can directly observe the surrounding physical environment. Each microdisplay is also configured to present, in the same field of view, a virtual image in the form of a computer-generated display image. The virtual image is superimposed over the real image, in registration with the real image, as seen by a user of the see-through display device. In other words, the real image and the virtual image share a common coordinate system, so that, for example, a virtual dragon circling 50 meters north of a real building stays at the correct position relative to the building even when the user turns his head.
With continued reference to FIG. 2, the computer 24 controls the internal components of microdisplays 20A and 20B to form the desired display image. In one embodiment, the computer 24 may cause microdisplays 20A and 20B to display the same image simultaneously, so that the wearer's right and left eyes receive the same image at the same time. In another embodiment, the microdisplays may simultaneously project stereoscopically related images, so that the wearer perceives a three-dimensional image. In one scenario, the computer-generated display image and the corresponding real image of an object seen through the microdisplay may occupy different focal planes. Accordingly, a wearer observing a real-world object may have to refocus to resolve the display image. In other scenarios, the display image and the real image may share a common focal plane.
Each eye tracker 22 includes a detector configured to detect an ocular state of the wearer of see-through display device 16. The eye tracker may determine the position of the pupil of the wearer's eye, locate the wearer's line of sight, and/or measure the degree to which the iris is closed. If two substantially identical eye trackers are included, one for each eye, they may be used together to determine the wearer's focal plane based on the point of convergence of the lines of sight of the wearer's left and right eyes. This information may be used, for example, for placement of one or more virtual images.
FIG. 3 shows another example see-through display device 16', an example of AR glasses. The AR glasses may closely resemble an ordinary pair of eyeglasses or sunglasses, but they also include microdisplays 20A and 20B and eye trackers 22A and 22B, arranged behind dimming filters 18A and 18B. The see-through display device 16' includes a wearable mount 26 that positions the microdisplays and the eye trackers a short distance in front of the wearer's eyes. In the embodiment of FIG. 3, the wearable mount takes the form of a conventional eyeglass frame.
No aspect of FIG. 2 or FIG. 3 is intended to be limiting in any sense, for numerous variations are contemplated as well. In some embodiments, for example, a binocular microdisplay extending over both eyes may be used instead of the monocular microdisplays shown in the figures. Likewise, the see-through display device may include a binocular eye tracker. In some embodiments, an eye tracker and a microdisplay may be integrated together and may share one or more components.
FIG. 4 illustrates aspects of an example optical assembly of the see-through display device 16. In the illustrated embodiment, the microdisplay 20 includes an illuminator 28 and an image generator 30. The illuminator may comprise a white light source, such as a white light-emitting diode (LED). The illuminator may further comprise optics suitable for collimating the emission of the white light source and directing the emission into the image generator. The image generator may include a rectangular array of light valves, such as a liquid-crystal display (LCD) array. The light valves in the array may be arranged to spatially vary and temporally modulate the collimated light transmitted through them, so as to form the pixels of a display image 32. Further, the image generator may include suitable light-filtering elements in registration with the light valves, so that the display image formed is a color image. The information content of the display image 32 may be supplied to the microdisplay 20 as any suitable data structure, such as a digital-image or digital-video data structure.
In another embodiment, illuminator 28 may include one or more modulated lasers, and image generator 30 may be moving optics configured to rasterize the emission of the lasers in synchronization with the modulation to form display image 32. In yet another embodiment, image generator 30 may include a rectangular array of modulated color LEDs arranged to form a display image. The illuminator 28 may be omitted from this embodiment when each color LED array emits its own light. Various active components of the microdisplay 20, including the image generator 30, are operatively coupled to the computer 24. In particular, the computer provides suitable control signals which, when received by the image generator, result in the formation of the desired display image.
With continued reference to FIG. 4, the microdisplay 20 includes a multipath optic 34. The multipath optic is suitably transparent, allowing an external image, such as the real image 36 of a real object, to be seen directly through it. The image generator 30 is arranged to project the display image 32 into the multipath optic, which reflects the display image into the pupil 38 of the wearer of see-through display device 16. To reflect the display image and transmit the real image to the pupil 38, the multipath optic 34 may include a partially reflective, partially transparent structure.
In some embodiments, the multipath optic 34 may be configured to have optical power. Such power may be used to direct the display image 32 to the pupil 38 with a controlled vergence, such that the display image is presented as a virtual image in a desired focal plane. In other embodiments, the position of the virtual display image may be determined by the converging power of lens 40. In one embodiment, the focal length of lens 40 may be adjustable, so that the focal plane of the display image can be moved back and forth in the wearer's field of view. In FIG. 4, the apparent position of the virtual display image 32 is shown, by way of example, at 42. In other embodiments, the focal length of lens 40 may be fixed, so that the focal plane of the display image is maintained at or near infinity. Even then, the focal plane of the display image can still be moved back and forth by providing stereoscopically related images to the microdisplays of each eye.
FIG. 4 also shows a camera 44, which receives the real image seen by the wearer through see-through display device 16. In the embodiment shown in FIG. 4, the camera receives the real image through multipath optic 34, which splits the real image into a first portion that passes to the pupil 38 and a second portion that is focused on the camera. In this configuration, the camera 44 may be referred to as a "forward-facing" video camera regardless of the actual orientation of its aperture.
FIG. 4 also shows aspects of the eye tracker 22, which includes an illuminator 46 and a detector 48. The illuminator may comprise a low-power infrared LED or diode laser. In one embodiment, the illuminator may provide periodic illumination in the form of narrow pulses, e.g., 1-microsecond pulses spaced 50 microseconds apart. The detector may be any camera suitable for imaging the wearer's eye in sufficient detail to resolve the pupil. More specifically, the resolution of the detector is sufficient to enable estimation of the position of the pupil relative to the eye orbit, and of the degree to which the iris is closed. In one embodiment, the aperture of the detector is equipped with a wavelength filter whose transmittance band matches the output wavelength of the illuminator. Further, the detector may include an electronic "shutter" synchronized to the pulsed output of the illuminator.
FIG. 4 also illustrates aspects of the controllable dimming filter 18, on which crossed polarizing layers 50 are arranged. The crossed polarizing layers are configured to reduce the transmittance of the see-through display device for the real image seen by the wearer. In one embodiment, the crossed polarizing layers may comprise an electrically polarizable liquid crystal; the transmittance can be reduced by increasing the polarization applied to the liquid crystal.
In addition to providing a premium AR experience, the configurations described above may also be used for other purposes. For example, a suitably configured see-through display device may be used to correct a color vision deficiency, or to provide extended color vision, for its wearer. More particularly, the configurations described herein enable various methods for improving the color resolution capability of a user of a see-through display device. Some such methods are now described, by way of example, with continued reference to the above configurations. It is to be understood, however, that the methods described herein, and others fully within the scope of the present invention, may be enabled by other configurations as well. Moreover, in some embodiments, certain method steps described and/or illustrated herein may be omitted without departing from the scope of the invention. Likewise, the indicated order of the method steps is not always required to achieve the intended results, but is provided for ease of illustration and description. One or more of the illustrated actions, functions, or operations may be performed repeatedly, depending on the particular strategy being used.
FIG. 5 illustrates an example method 52 for improving color resolution capabilities of a user of a see-through display device (e.g., see-through display device 16). The method may be implemented in a see-through display device, at least partially in the cloud, or in any other suitable manner.
At the beginning of method 52, input may be accepted from the user by the see-through display device to help determine how to enhance various colors for the user. The input may directly or indirectly control which color or colors are to be enhanced by the see-through display device. Various modes of operation are contemplated herein. For example, in one embodiment, at 54, the user may specify the particular type of color vision deficiency that he or she experiences, e.g., red-blind, green-blind, red-weak, green-weak, blue-yellow blind, or blue-yellow weak. Optionally, the input may also specify the degree or severity of the color vision deficiency. In another embodiment, at 56, the user may specify one or more colors in the real image that should be emphasized, e.g., red, green, blue, yellow, or no color at all. In a typical scenario, the specified color will be one not well resolved by the user's unassisted eye.
In still other embodiments, at 57, user input may be received in response to a visual test presented to the user, to allow the see-through display device to determine a condition to correct. As a more specific example, the see-through display device may display one or more images for testing for color vision deficiency, such as an Ishihara color-test image, and one or more user inputs in the form of responses to those images may be received. Based on the results of such color testing, the see-through display device may be adjusted to suit one or more color vision deficiencies of a particular user.
In still other embodiments, the see-through display device may be configured to discern a color vision deficiency indirectly, without the user's knowledge. For example, the device may be configured to measure how long it takes the user to select a green icon on a red background tile, compared to selecting a blue icon on a red background tile. An extended time to select the green icon may indicate some form of red-green deficiency. Additionally or alternatively, the device may be configured to measure the length of time the user spends looking at selected aspects of a scene, i.e., those aspects that are difficult for a person with a color vision deficiency to resolve. This dwell time may be compared to the average dwell time of users viewing the same or a similar scene. In effect, the device is able to assess whether signage on a building, for example, is as salient for the user as it would be for the average person. If not, a color vision deficiency may be indicated.
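The indirect assessment described above can be reduced to a simple timing comparison. The sketch below is illustrative only: the function name, the use of mean selection times, and the 1.5x threshold are assumptions for illustration, not values given in this disclosure.

```python
def suspect_red_green_deficit(green_icon_times, blue_icon_times, ratio=1.5):
    """Flag a possible red-green deficiency when the user is consistently
    slower selecting green-on-red icons than blue-on-red icons.

    Both arguments are lists of selection times in seconds; `ratio` is an
    illustrative threshold, not a clinically validated value.
    """
    if not green_icon_times or not blue_icon_times:
        return False  # not enough evidence either way
    mean_green = sum(green_icon_times) / len(green_icon_times)
    mean_blue = sum(blue_icon_times) / len(blue_icon_times)
    # Markedly slower green-icon selection suggests a red-green deficit.
    return mean_green >= ratio * mean_blue

print(suspect_red_green_deficit([2.4, 2.8, 2.6], [1.0, 1.2, 1.1]))  # True
print(suspect_red_green_deficit([1.1, 1.0], [1.0, 1.2]))            # False
```

A real system would of course gather many trials and control for icon position and size before drawing any conclusion.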
In the various embodiments contemplated herein, the user input accepted in method 52 may specify a color vision deficiency, a color to be enhanced, both, or neither. Further, the user input may take any suitable form: voice input, gesture input received via an outward-facing image sensor as described above, a preference file stored in removable memory or in cloud 14, and/or any other suitable form.
With continued reference to fig. 5, at 58, an image of the real image seen by the user is obtained. In one embodiment, the image may be obtained using a forward facing camera of a see-through display device as described above.
At 60, a locus of a color not well resolved by the user is identified from the obtained image and submitted for further processing. The locus referred to here may be one of a plurality of contiguous or non-contiguous loci so identified and submitted. In embodiments where the user input specifies a particular color, one or more loci of that color may be identified. For example, if the user input says "show me everything red," a locus comprising all the red pixels of the obtained image may be identified. In embodiments where the user input specifies a particular color vision deficiency, the locus may be identified based on the characteristics of that deficiency. For example, if a user identifies himself as green-weak, one of two possible scenarios may be enacted: a locus comprising all the red pixels, or all the green pixels, of the image may be identified and submitted for further processing. In one embodiment, the decision to identify red pixels or green pixels may be based on the user's preference. In another embodiment, the decision may be based on the relative prevalence of red and green in the real image; for example, it may be desirable to identify the color that appears less commonly in the real image.
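As a minimal sketch of this locus-identification step, the function below builds a boolean mask of camera-frame pixels dominated by one color channel. The function name, dominance ratio, and brightness floor are invented illustrative values; a practical implementation would more likely operate in a perceptual color space than in raw RGB.

```python
import numpy as np

def find_color_loci(rgb, channel=0, dominance=1.3, min_value=60):
    """Return a boolean mask of pixels dominated by one color channel.

    rgb: H x W x 3 uint8 array from the forward-facing camera.
    channel: 0 selects red loci, 1 selects green loci (simple heuristic).
    """
    img = rgb.astype(np.float32)
    target = img[..., channel]
    # Brightest of the two remaining channels, for the dominance test.
    others = np.delete(img, channel, axis=-1).max(axis=-1)
    # A pixel belongs to the locus when the target channel is bright
    # enough and clearly dominates both other channels.
    return (target >= min_value) & (target >= dominance * (others + 1e-6))

# Example: a 1x2 image with one strong-red pixel and one gray pixel.
frame = np.array([[[200, 40, 40], [120, 120, 120]]], dtype=np.uint8)
mask = find_color_loci(frame, channel=0)
print(mask.tolist())  # [[True, False]]
```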
In these and other embodiments, identifying loci of colors not well resolved by the user may include identifying objects in the obtained image. In a typical scenario, the objects to be identified may include various kinds of signage: road signs, warning signs, stop signs, and so on. For example, a locus having the octagonal shape of a stop sign can be identified from the obtained image whether or not the see-through display device is able to recognize the locus as red. In some cases, machine vision may have difficulty determining color reliably under low-brightness conditions, or in the presence of strong specular reflection.
At 62 of method 52, a virtual image is constructed to be superimposed on the real image that the user sees through the see-through display device. The virtual image may be configured to emphasize the loci identified above in the real image, i.e., loci of colors that are not well resolved by the user. More specifically, the virtual image may be constructed based on the image of the real image obtained at 58.
In one embodiment, the virtual image may be configured to transform the color of a locus to a color better distinguished by the user. This approach takes advantage of the additive combination of real and virtual images in the see-through display device. For example, by overlaying a green locus with purple light of appropriate intensity, the locus can be made to appear bluer. A user exhibiting any form of red-green color vision deficiency will find the enhanced locus easier to distinguish, particularly from a red background, than the original green locus. This approach is illustrated in FIGS. 6 and 7: FIG. 6 shows how a holly tree may look to a green-weak person before emphasis of the red berries, and FIG. 7 shows how it may look after. In such embodiments, a controllable dimming filter arranged in the see-through display device may be configured to controllably reduce the brightness of the real image to obtain a desired overall brightness level. In some scenarios, judicious use of the dimming filter during color transformation may enable the microdisplay to operate at lower power, thereby extending battery life.
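Because a see-through display can only add light to the scene, the color transformation above can be sketched as a purely additive overlay: black (no added light) everywhere except at the identified loci, where a purple tint is added so that green appears bluer. The function name and tint value below are arbitrary illustrative choices, not parameters from this disclosure.

```python
import numpy as np

def purple_overlay(rgb, green_mask, tint=(90, 0, 130)):
    """Build an additive virtual image that makes green loci appear bluer.

    The overlay is black (adds no light) outside the mask; inside it, the
    display adds a purple tint on top of the real image.
    """
    overlay = np.zeros_like(rgb)
    overlay[green_mask] = tint
    # What the eye perceives is real + virtual, clipped to full white.
    perceived = np.clip(rgb.astype(np.int32) + overlay, 0, 255).astype(np.uint8)
    return overlay, perceived

frame = np.array([[[30, 180, 30]]], dtype=np.uint8)  # one green pixel
mask = np.array([[True]])
overlay, perceived = purple_overlay(frame, mask)
print(perceived.tolist())  # [[[120, 180, 160]]] -- noticeably bluer
```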
In another embodiment, the virtual image may be configured simply to increase the brightness of the identified loci, without changing their color. This approach may be useful for users with mild color vision deficiency. The approach may involve brightening each red locus to make it redder and/or each green locus to make it greener. With the brightness of these colors increased, a mildly affected user may find them easier to distinguish. This approach is illustrated in FIG. 8.
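A brightness-only emphasis can be sketched in the same additive fashion: the overlay is a scaled copy of the locus's own pixels, so each color is brightened without changing its hue. The gain value is an assumption for illustration.

```python
import numpy as np

def brightness_boost(rgb, mask, gain=0.5):
    """Additive overlay that brightens masked loci without changing hue.

    Each masked pixel's own color, scaled by `gain`, is added back onto it,
    so a red locus becomes redder and a green locus greener.
    """
    overlay = np.zeros_like(rgb)
    overlay[mask] = np.clip(rgb[mask].astype(np.float32) * gain, 0, 255).astype(np.uint8)
    # Perceived image is the real image plus the additive virtual image.
    return np.clip(rgb.astype(np.int32) + overlay, 0, 255).astype(np.uint8)

frame = np.array([[[160, 20, 20], [20, 20, 20]]], dtype=np.uint8)
mask = np.array([[True, False]])
print(brightness_boost(frame, mask).tolist())
# [[[240, 30, 30], [20, 20, 20]]] -- only the red locus is brightened
```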
In another embodiment, the virtual image may be configured to color-shift or brighten the identified loci differently for the right and left eyes. For example, the virtual image may highlight green for the left eye and red for the right eye, or vice versa. A user with a red-green deficiency may learn over time to associate the different highlights with the different colors, just as a person wearing red/green 3D glasses learns to adapt to the red-green separation after only a few minutes. In embodiments that include a subtractive display, the red and green in the identified loci may be darkened rather than brightened.
In another embodiment, the virtual image may be configured to trace the perimeter of a locus. If the locus of the real image is red, for example, the virtual image may include a narrow line rendered in cyan (the color complementary to red). If the intensity of the virtual image is chosen to balance the intensity of the real image, the red locus will appear with a white outline superimposed on it, as shown in FIG. 9. It should be understood, however, that a perimeter of any color brighter than the original locus may be obtained according to this approach. Here too, the controllable dimming filter may be used in some cases to reduce the brightness of the real image.
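The perimeter-tracing embodiment might be sketched by marking mask pixels that border a non-mask pixel and painting them with the complementary color. The 4-neighbor boundary test below is one simple possibility, not the method prescribed by this disclosure.

```python
import numpy as np

def outline_locus(rgb, mask, color=(0, 255, 255)):
    """Build an additive overlay that draws a one-pixel outline (default
    cyan, complementary to red) around each masked locus."""
    # Interior pixels are those whose four neighbors all lie in the mask.
    padded = np.pad(mask, 1, constant_values=False)
    interior = (padded[1:-1, 1:-1] & padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    border = mask & ~interior
    overlay = np.zeros_like(rgb)
    overlay[border] = color
    return overlay

mask = np.ones((3, 3), dtype=bool)                # a 3x3 locus
rgb = np.full((3, 3, 3), 100, dtype=np.uint8)
overlay = outline_locus(rgb, mask)
print(overlay[1, 1].tolist())  # [0, 0, 0] -- center pixel is interior
print(overlay[0, 0].tolist())  # [0, 255, 255] -- edge pixel gets cyan
```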
In another embodiment, the virtual image may be configured to overwrite a locus with text, e.g., white text, text of the color complementary to the locus, or text of any color that is brighter than and/or contrasts suitably with the locus as it appears to a user with the particular color vision deficiency. This approach is illustrated in FIG. 10. In other embodiments, the virtual image may be configured to overwrite a locus with a symbol or a segmentation map, as shown in FIGS. 11 and 12, respectively.
In yet another embodiment, the virtual image may be configured to write text and/or symbols near the locus. This approach is illustrated in FIG. 13. It should be understood that the virtual image may be configured to enact various combinations of the locus-emphasis approaches identified above. For example, a given locus may be both color-transformed and overwritten with a segmentation map, if desired.
As noted above, in embodiments where a locus is identified via object recognition, the virtual image may be further constructed based on characteristics of the recognized object. In particular, the virtual image may be configured to emphasize a recognized object over and beyond the other loci emphasized in the virtual image. Autumn leaves may be red, for example, but a stop sign may also be red. In a scenario where all red pixels are emphasized, the stop sign may be emphasized differently, or more strongly, or in a manner not applied to the leaves. In this way, the user's ability to resolve recognized objects relevant to safety is given higher priority.
Returning now to FIG. 5, at 64, the brightness of the real image seen by the wearer is optionally reduced via the dimming filter of the see-through display device. The amount of brightness reduction may reflect the desired overall brightness level perceived by the user in the enhanced image, the need to provide color transformation at reduced power, and so on. At 66, the virtual image constructed as described above is displayed. More specifically, the virtual image is superimposed over the real image seen by the user of the see-through display device, in registration with the real image, in the user's field of view.
No aspect of FIG. 5 should be understood in a limiting sense, for numerous variations and extensions are contemplated as well. For example, a user of a see-through display device may want help identifying matching colors in certain scenarios, such as in a wardrobe. Here, the virtual image may be constructed so as to emphasize loci of matching colors in the real image, overwriting those loci with one or more of text, symbols, and a segmentation map to indicate which colors match.
In other embodiments, the methods presented herein may be refined with the aid of an eye tracker arranged in the see-through display device. In one embodiment, the eye tracker may be configured to track the user's focal point. The virtual image constructed at 62 of the above method may then be configured so that a poorly resolved locus is emphasized to a greater degree when it lies closer to the focal point than when it lies farther away. This approach may be useful for reducing virtual "clutter" in the user's field of view, or for directing computational effort to areas actually being looked at.
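The gaze-dependent emphasis could be modeled as a weight that decays with distance from the tracked focal point. The Gaussian falloff and the radius below are assumed parameters for illustration, not values from this disclosure.

```python
import math

def emphasis_weight(locus_xy, focus_xy, radius=150.0):
    """Scale emphasis strength (0..1) by distance from the user's focus.

    locus_xy, focus_xy: pixel coordinates; radius: illustrative falloff
    scale in pixels. Loci near the focal point receive full emphasis.
    """
    d = math.dist(locus_xy, focus_xy)
    return math.exp(-(d / radius) ** 2)

print(round(emphasis_weight((100, 100), (100, 100)), 3))  # 1.0
print(emphasis_weight((400, 100), (100, 100)) < 0.1)      # True
```

The returned weight could, for instance, multiply the overlay intensity of each locus before the virtual image is composed.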
FIG. 14 illustrates another example method 68 for improving color resolution capability of a user of a see-through display device. Similar to method 52 of fig. 5, the method may be implemented in a see-through display device or in any other suitable manner.
At 58, an image of the real image seen by the user is obtained. At 60, loci of colors not well resolved by the user are identified from the obtained image. At 62, a first virtual image is constructed. This image may be substantially the same as the virtual image referred to previously, i.e., a virtual image to be superimposed on the real image seen by the user through the see-through display device. As described above, this virtual image may be configured to emphasize loci of colors in the real image that are not well resolved by the user.
At 70, a request for a second virtual image is received. The request may originate from an application or operating system running on the see-through display device, or elsewhere in the AR environment. The second virtual image may include virtually any desired content, text or graphics; it may or may not be related to the real image seen by the user. At 72, the second virtual image is constructed.
In the various embodiments contemplated herein, the second virtual image may be configured to be resolvable by the user when superimposed on the real image that the user sees. Suppose, for example, that a user with green weakness is looking at a real image dominated by green. That user may have difficulty resolving a red virtual image overlaid on such a background. Accordingly, in one embodiment, the see-through display device may be configured to change the display color of the requested second virtual image so that it becomes more readily discernible to a user with a color vision deficiency. Thus, in one embodiment, the red pixels of the requested second virtual image may be rendered purple. In a more particular embodiment, the second virtual image may be configured to be discernible by the user when superimposed on the real image together with the first virtual image.
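The red-to-purple recoloring mentioned above can be sketched as a conditional channel shift: pixels whose red channel dominates green gain blue, pushing them toward purple against a green-dominated background. Both thresholds below are illustrative assumptions, not values specified in this disclosure:

```python
def shift_red_to_purple(pixels, red_dominance=0.3, blue_boost=0.8):
    """Re-render predominantly red pixels of a virtual image as purple.

    `pixels` is a list of (r, g, b) triples with components in 0-1.
    A pixel whose red channel exceeds its green channel by more than
    `red_dominance` gains blue in proportion to its red, so a
    green-weak user can distinguish it from a green background.
    """
    out = []
    for r, g, b in pixels:
        if r - g > red_dominance:
            b = min(1.0, b + blue_boost * r)  # add blue, clamped to 1.0
        out.append((r, g, b))
    return out

# The pure-red pixel becomes purple; the green pixel is untouched.
print(shift_red_to_purple([(1.0, 0.0, 0.0), (0.1, 0.9, 0.1)]))
# [(1.0, 0.0, 0.8), (0.1, 0.9, 0.1)]
```

A fuller implementation would choose the replacement hue from the user's specific deficiency profile rather than applying one fixed shift.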
At 66, the first and second virtual images constructed as described above are displayed. More specifically, the first and second virtual images are superimposed on, and in registration with, the real image seen by the user of the see-through display device in the user's field of view.
In some embodiments, the methods and processes described above may be tied to a computing system of one or more computing devices. In particular, such methods and processes may be implemented as a computer application or service, an application programming interface (API), a library, and/or other computer program product.
FIG. 15 schematically illustrates a non-limiting embodiment of a computing system 24' that may implement one or more of the methods or processes described above. Computing system 24' is shown in simplified form. It should be understood that virtually any computer architecture may be used without departing from the scope of this disclosure. In different embodiments, the computing system 24' may take the form of a mainframe computer, server computer, desktop computer, laptop computer, tablet computer, home-entertainment computer, network computing device, gaming device, mobile computing device, mobile communication device (e.g., smart phone), etc.
The computing system 24' includes a logic subsystem 74 and a storage subsystem 76. Computing system 24' optionally includes a display subsystem 20', an input subsystem 22', a communication subsystem 78, and/or other components not shown in fig. 15.
Logic subsystem 74 includes one or more physical devices configured to execute instructions. For example, the logic subsystem may be configured to execute instructions that are part of one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more components, or otherwise arrive at a desired result.
The logic subsystem may include one or more processors configured to execute software instructions. Additionally or alternatively, the logic subsystem may include one or more hardware or firmware logic machines configured to execute hardware or firmware instructions. The processors of the logic subsystem may be single-core or multi-core, and the programs executing thereon may be configured for serial, parallel, or distributed processing. The logic subsystem may optionally include individual components distributed among two or more devices, which may be remotely located and/or configured for coordinated processing in some embodiments. Aspects of the logic subsystem may be virtualized and executed by remotely accessible network computing devices configured in a cloud computing configuration.
Storage subsystem 76 includes one or more physical, non-transitory devices configured to hold data and/or instructions executable by the logic subsystem to implement the methods and processes described herein. When such methods and processes are implemented, the state of the storage subsystem 76 may be transformed (e.g., to hold different data).
The storage subsystem 76 may include removable computer-readable storage media and/or built-in devices. Storage subsystem 76 may include optical memory devices (e.g., CD, DVD, HD-DVD, blu-ray disc, etc.), semiconductor memory devices (e.g., RAM, EPROM, EEPROM, etc.), and/or magnetic memory devices (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.), among others. The storage subsystem 76 may include volatile, nonvolatile, dynamic, static, read/write, read-only, random-access, sequential-access, location-addressable, file-addressable, and/or content-addressable devices.
In some embodiments, aspects of the logic subsystem 74 and the storage subsystem 76 may be integrated together in one or more hardware logic components by which the functions described herein may be implemented, at least in part. Such hardware logic components may include: for example, Field Programmable Gate Arrays (FPGAs), program and application specific integrated circuits (PASIC/ASIC), program and application specific standard products (PSSP/ASSP), system on a chip (SOC), and Complex Programmable Logic Devices (CPLDs).
It should be understood that the storage subsystem 76 includes one or more physical, non-transitory devices. In some embodiments, however, aspects of the instructions described herein may be propagated in a transitory fashion by a pure signal (e.g., an electromagnetic signal, an optical signal, etc.) that is not held by a physical device for at least a finite duration. Furthermore, data and/or other forms of information pertaining to the present disclosure may be propagated by a pure signal.
The term "program" may be used to describe an aspect of computing system 24' that is implemented to perform particular functions. In some cases, the program may be instantiated via logic subsystem 74 executing instructions held by storage subsystem 76. It should be appreciated that different programs may be instantiated by the same application, service, code block, object, library, routine, API, function, etc. Likewise, the same program may be instantiated by different applications, services, code blocks, objects, routines, APIs, functions, etc. The term "program" may include individual executable files, data files, libraries, drivers, scripts, database records, and the like, or collections thereof.
It should be understood that a "service," as used herein, is an application that is executable across multiple user sessions. The services may be available to one or more system components, programs, and/or other services. In some implementations, the service may run on one or more server computing devices.
When included, display subsystem 20' may be used to present a visual representation of the data held by storage subsystem 76. This visual representation may take the form of a graphical user interface (GUI). Because the methods and processes described herein change the data held by the storage subsystem, and thus transform the state of the storage subsystem, the state of display subsystem 20' may likewise be transformed to visually represent changes in the underlying data. Display subsystem 20' may include one or more display devices utilizing virtually any type of technology. Such display devices may be combined with logic subsystem 74 and/or storage subsystem 76 in a shared enclosure, or such display devices may be peripheral display devices.
When included, the input subsystem 22' may comprise or interface with one or more user-input devices, such as a keyboard, mouse, touch screen, or game controller. In some embodiments, the input subsystem may comprise or interface with selected natural user interface (NUI) componentry. Such componentry may be integrated or peripheral, and the transduction and/or processing of input actions may be handled on- or off-board. Example NUI componentry may include a microphone for speech and/or voice recognition; an infrared, color, ultrasonic, and/or depth camera for machine vision and/or gesture recognition; a head tracker, eye tracker, accelerometer, and/or gyroscope for motion detection and/or intent recognition; and electric-field sensing componentry for assessing brain activity.
The communication subsystem 78, when included, may be configured to communicatively couple the computing system 24' with one or more other computing devices. The communication subsystem 78 may include wired and/or wireless communication devices compatible with one or more different communication protocols. As a non-limiting example, the communication subsystem may be configured for communication via a wireless telephone network or a wired or wireless local or wide area network. In some embodiments, the communication subsystem may allow computing system 24' to send and/or receive messages to and/or from other devices over a network such as the internet.
The subject matter of the present disclosure includes all novel and nonobvious combinations and subcombinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.

Claims (10)

1. In a see-through display device (16), a method (52) for improving a color resolving power of a user (12) of the see-through display device based on a color vision deficiency of the user, the method comprising:
constructing (62) a virtual image to be superimposed on a real image (36) visible through the see-through display device, the virtual image being configured to emphasize sites of poorly resolved colors in the real image based on the color vision deficiency; and
displaying (66) the virtual image such that the virtual image is superimposed on the real image in registration with the real image in a field of view of the see-through display device.
2. The method of claim 1, wherein the virtual image is configured to transform a color of the site.
3. The method of claim 1, wherein the virtual image is configured to increase a brightness of the site.
4. The method of claim 1, wherein the virtual image is configured to delineate a perimeter of the site.
5. The method of claim 1, wherein the virtual image is configured to overwrite the locus with text.
6. The method of claim 1, wherein the virtual image is configured to overwrite the site with one or more of a symbol and a segmentation map.
7. The method of claim 1, wherein the virtual image is configured to write one or more of text and symbols near the site.
8. A see-through display device (16) configured to improve color resolution of a user (12) having color vision deficiency, the see-through display device comprising:
a forward facing camera (44) configured to obtain an image of a real image (36) visible through the see-through display device;
a logic subsystem (74) operatively coupled to a storage subsystem (76), the storage subsystem storing instructions that cause the logic subsystem to construct a virtual image to be superimposed on the real image, the virtual image configured to emphasize sites of poorly resolved colors in the real image based on the color vision deficiency; and
a microdisplay (20) configured to display the virtual image such that the virtual image is superimposed on the real image in registration with the real image in a field of view of the see-through display device.
9. The device of claim 8, further comprising an eye tracker configured to track a focus point of the user, wherein the virtual image is configured to emphasize the location to a greater extent when the location is closer to the focus point than when the location is farther from the focus point.
10. The apparatus of claim 8, further comprising a controllable dimming filter configured to controllably reduce the brightness of the real image seen by the user.
HK14108736.6A 2014-08-27 Color vision deficit correction HK1195383A (en)

Publications (1)

Publication Number Publication Date
HK1195383A true HK1195383A (en) 2014-11-07
