
HK1191701B - Hierarchical, zoomable presentations of media sets


Info

Publication number: HK1191701B
Authority: HK (Hong Kong)
Prior art keywords: media, media object, score, collection, presentation
Application number: HK14104785.5A
Other languages: Chinese (zh)
Other versions: HK1191701A (en)
Inventors: Sander Martijn Viegers, Daniel Rosenstein
Original assignee: Microsoft Technology Licensing, LLC
Application filed by Microsoft Technology Licensing, LLC
Publication of HK1191701A (en)
Publication of HK1191701B (en)

Description

Hierarchical, scalable presentation of media collections
Background
Within the field of computing, many scenarios involve the presentation of a media collection, which includes a collection of media objects, such as still images, videos, audio recordings, documents or multimedia objects, or some mix of these media object types. The corresponding media objects may be generated by the user for whom the media collection is presented (e.g., a collection of photographs including photographs taken by the user), may be generated by other users and collected into the media collection by the user (e.g., photographs posted by friends of the user on a social network), and/or may be obtained by the user from a media library (e.g., purchased from a media store).
The presentation of the media objects may take many forms. The user may generate a presentation, such as a collage of the collected images physically arranged by the user in a desired manner, or a slideshow that presents a sequence of images in an order selected by the user. Alternatively, a device that stores or provides access to the images may automatically generate and present various views of the media objects, such as a timed sequence comprising a slideshow, or a collection of preview versions of the corresponding media objects, such as reduced-size "thumbnail" versions of images, excerpts of an audio recording, or a leading abstract of a document.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key factors or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Many types of presentations of media collections can be problematic. As a first example, if the number of media objects in a media collection is large, the amount of time required to present the media objects as a sequential slideshow may be unacceptable, and a presentation as a collection of thumbnail images may be cumbersome to browse. Furthermore, a large number of the media objects in a media collection may be uninteresting or redundant; a user of a digital camera, for example, may capture hundreds or thousands of images while on vacation, but many images may be of poor quality (such as underexposed, overexposed, out-of-focus, or occluded images) and many others may be duplicate images of the same subject under the same settings. It may therefore not be desirable to present all of the images to the user.
The user may generate a media presentation from the media objects (e.g., by selecting important images and creating a collage or album), thereby improving the selectivity, quality, and narrative context of the media presentation. However, many of the techniques used to assist users in creating media presentations can be very time-consuming; for example, the user may have to explicitly specify the media objects to be included in the presentation, as well as the order, size, and location of the media objects within the layout. Thus, these techniques present an inefficient and laborious way for the user to create a presentation of a media collection.
Presented herein are techniques for generating a media presentation of a media collection. According to these techniques, each media object may be assigned a score, for example, between 1 and 10, to indicate the importance of the media object within the media collection. These scores may be generated by the user (e.g., by the user selecting a score for the corresponding media object, or inferred from simple user interaction with the media collection, such as assigning a higher score to a media object that the user selects to view, spends more time viewing, or shares with friends). Alternatively or additionally, a score for the media object may be automatically generated (e.g., image evaluation may be applied to a collection of images to identify the visual quality of each image, such as sharpness, focus, and subject centering, and images with higher visual quality may be assigned higher scores).
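As a non-limiting illustration, the following Python sketch combines these three sources of a score; the field names, weights, and 1-10 range are assumptions chosen for the example rather than details specified by this description.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MediaObject:
    """Hypothetical record for one media object in a media collection."""
    object_id: str
    user_score: Optional[float] = None      # explicit 1-10 rating from the user, if any
    view_seconds: float = 0.0               # time the user spent viewing the object
    share_count: int = 0                    # times the user shared the object
    estimated_quality: float = 0.5          # automatic quality estimate in [0, 1]

def identify_score(obj: MediaObject) -> float:
    """Return a score in [1, 10] for the media object.

    An explicit user rating wins; otherwise the score is inferred from
    user interaction and the automatically estimated visual quality.
    """
    if obj.user_score is not None:
        return max(1.0, min(10.0, obj.user_score))
    # Infer from interaction: longer viewing and sharing suggest higher interest.
    interaction = min(1.0, obj.view_seconds / 30.0) + min(1.0, obj.share_count / 3.0)
    inferred = 1.0 + 4.5 * obj.estimated_quality + 4.5 * (interaction / 2.0)
    return max(1.0, min(10.0, inferred))

# Example: an unshared, briefly viewed image of middling quality scores low.
print(identify_score(MediaObject("img_0001", view_seconds=2, estimated_quality=0.4)))
```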
A scalable media presentation may then be generated, wherein a low zoom level is selected in an initial state and the media objects having high scores within the media collection are presented within the scalable media presentation. When a request is received to zoom in on the media presentation near a particular media object, other media objects related to the zoomed media object but having a lower score than the zoomed media object (e.g., for a collection of images, other images captured on the same day, captured at the same location, or depicting the same subject) may be selected and inserted into the scalable media presentation near the zoomed media object. Further, the size of a corresponding media object may be adjusted not only according to the zoom level within the scalable media presentation, but also according to the score of the media object. For example, a media presentation of an image collection may initially be presented at a low zoom level, including only the images with the highest scores within the collection. When the user chooses to zoom in on a particular image, the zoom state of the scalable media presentation may transition to a higher zoom level near that image, and images associated with the particular image and having medium scores may be inserted within the media presentation near the selected image. Further zooming in on any of these images may result in the insertion (near the selected image) of additional images from the image collection that are associated with the zoomed image and have low scores. Conversely, zooming out may result in reducing the size of lower-scoring images among the currently presented images, and may result in their removal from the scalable media presentation.
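The drill-down behavior described above can be sketched as follows; the zoom levels, score thresholds, and grouping attribute are illustrative assumptions, and an actual presentation would also account for layout and sizing.

```python
from dataclasses import dataclass

@dataclass
class ScoredObject:
    object_id: str
    score: float            # 1 (low) .. 10 (high)
    group: str              # e.g., capture day or depicted subject

def score_threshold(zoom_level: int) -> float:
    """Minimum score shown at a zoom level (assumed: 1 = fully zoomed out)."""
    # Zoom level 1 shows only scores >= 8, level 2 shows >= 5, level 3+ shows all.
    return {1: 8.0, 2: 5.0}.get(zoom_level, 0.0)

def visible_objects(collection, zoom_level, zoomed=None):
    """Objects to present: globally high-scoring ones, plus objects related to
    the zoomed object whose scores clear the (lower) threshold for this level."""
    threshold = score_threshold(zoom_level)
    shown = [o for o in collection if o.score >= score_threshold(1)]
    if zoomed is not None:
        related = [o for o in collection
                   if o.group == zoomed.group and o.score >= threshold]
        shown.extend(o for o in related if o not in shown)
    return shown

collection = [
    ScoredObject("a", 9.0, "day1"), ScoredObject("b", 6.0, "day1"),
    ScoredObject("c", 2.0, "day1"), ScoredObject("d", 8.5, "day2"),
]
print([o.object_id for o in visible_objects(collection, 1)])                 # ['a', 'd']
print([o.object_id for o in visible_objects(collection, 2, collection[0])])  # zooming near 'a' adds 'b'
```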
In this manner, the media presentation may initially present the media objects with the highest scores in the media collection, and the zoom level and position may be understood as a request to "drill down" into the media collection to present more media objects (with lower scores) related to the zoomed media object. Furthermore, this hierarchical presentation of media objects may be achieved with reduced or even no user involvement; for example, the user need not specify the layout and order of the media objects within the media presentation, but may simply interact with the media collection, and the user interaction may be monitored and interpreted to indicate the relative importance of the media objects in the media collection.
To the accomplishment of the foregoing and related ends, the following description and annexed drawings set forth certain illustrative aspects and implementations. These aspects and implementations are indicative of but a few of the various ways in which one or more aspects may be employed. Other aspects, advantages and novel features of the disclosure will become apparent from the following detailed description when considered in conjunction with the drawings.
Drawings
FIG. 1 is an illustration of an exemplary scenario involving a media collection presented to a user as a slideshow or thumbnail collection.
FIG. 2 is an illustration of an exemplary scenario involving a media collection designed by a user as a collage.
FIG. 3 is an illustration of an exemplary scenario involving identification of ratings by a user for corresponding media objects of a media collection.
FIG. 4 is an illustration of an exemplary scenario involving a scalable media presentation of media objects in accordance with the techniques presented herein.
FIG. 5 is a flow chart illustrating an exemplary method of presenting a media collection including at least one media object in accordance with the techniques presented herein.
FIG. 6 is a flow diagram illustrating an exemplary method of generating a media presentation of a media collection including at least one media object in accordance with the techniques presented herein.
FIG. 7 is an illustration of an exemplary computer-readable medium comprising processor-executable instructions configured to embody one or more of the provisions set forth herein.
FIG. 8 is an illustration of an exemplary scenario involving one effect of a zoom operation within a scalable media presentation of a media collection.
FIG. 9 is an illustration of an exemplary scenario involving another effect of a zoom operation within a scalable media presentation of a media collection.
FIG. 10 is an illustration of an exemplary scenario involving one effect of a zoom operation within a scalable media presentation of a media collection comprising video clips.
FIG. 11 is an illustration of an exemplary scenario involving arranging media objects of a media collection utilizing a media collection context of the media collection.
FIG. 12 is an illustration of an exemplary scenario involving arranging media objects of a media collection along two axes representing different attributes of the media objects.
FIG. 13 illustrates an exemplary computing environment in which one or more of the provisions set forth herein may be implemented.
Detailed Description
The claimed subject matter is described below with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the claimed subject matter. It may be evident, however, that the claimed subject matter may be practiced without these specific details. In other instances, various structures and devices are shown in block diagram form in order to facilitate describing the claimed subject matter.
A. Introduction
Within the field of computing, many scenarios involve a media collection comprising one or more media objects that may be presented to a user. The media collection may include, for example: images such as photographs or drawings; animations or video recordings of a real-world or virtual environment; audio recordings of music, speech, or ambient sounds; documents such as text, pictorial literature, newspapers, or caricatures; mixed media objects such as audiovisual recordings or documents with embedded animations; or a hybrid collection comprising objects of various types. These media objects may be created, for example, by the user (e.g., photographs taken during travel); by acquaintances of the user and selected by the user for the collection (e.g., photos captured by other users and shared with the user through a social media network or photo-sharing service); or by a separate service that delivers the media objects to the user (e.g., a library of images from which the user obtains a subset).
In such scenarios, the user may request to view a presentation of the media collection in a variety of ways. As a first example, the media objects of a media collection may be presented in an ordered or arbitrary (e.g., randomized) sequence, such as a montage, or presented simultaneously, such as a collage. As a second example, a media collection may be organized with user input, such as a user-designed album, or may be automatically organized according to various criteria. As a third example, the media objects may be presented in a non-interactive manner (e.g., a collection of still images) or an interactive manner (e.g., a slideshow that a user may navigate in a desired order at a desired rate).
FIG. 1 shows an exemplary scenario 100 involving a media collection 102 that includes various media objects 104 (illustrated here as images) to be presented to a user 106. Many types of media presentations may be generated for the user 106 from the media collection 102. The exemplary scenario 100 of FIG. 1 presents some examples of media presentations that may be automatically generated from the media collection 102 by a device (e.g., a workstation, server, tablet, smartphone, or camera) without involvement of the user 106. As a first example, the media collection 102 may be presented as a slideshow 108 that includes a sequence of images, each presented for a short period of time. The slideshow 108 may be ordered in a number of ways (e.g., chronologically by file creation time, or alphabetically by file name), and the user may passively view the slideshow 108 or may choose to flip through the images at a desired rate. As a second example, the media collection 102 may be presented as a thumbnail collection 110 that includes a collection of thumbnail versions 112 of the images of the media collection 102, such as reduced-size versions indicating the content of the corresponding images when viewed at full resolution. The user 106 may be allowed to navigate through the thumbnail collection 110 and view any image at full resolution by selecting the corresponding thumbnail version 112.
Although the automatically generated media presentations in the exemplary scenario 100 of FIG. 1 may allow the user 106 to review the content of the media collection 102, these exemplary automatically generated media presentations may pose some difficulty for the user 106, particularly for larger media collections 102. For example, the media collection 102 in the exemplary scenario 100 of FIG. 1 includes 1352 images, which may be cumbersome or burdensome for the user 106 to review. Furthermore, it is possible that only a subset of the media objects 104 in the media collection 102 are of particular interest or relevance to the user 106. For example, the first four images in the media collection 102 may depict the same scene, involving two people standing near a body of water on a sunny day. The first image may be of interest to the user 106 and may comprise a better version of the scene than the second image (which may be skewed), the third image (which may be blurred), and the fourth image (which may not exhibit significant imperfections but may simply be redundant with the first image). The media collection 102 may include many such substandard or redundant media objects 104, and it may thus be undesirable to present the entire media collection 102 to the user 106. For example, even at five seconds per image, a slideshow 108 of the entire media collection 102 may have a duration of approximately two hours; and the thumbnail collection 110 of the media collection 102 may include 1352 thumbnail versions 112 and may thus be burdensome for the user 106.
The selectivity in presenting the media collection 102 may be improved through the participation of the user 106. For example, the user 106 may explicitly define a subset of the media objects 104 to be included in the media presentation. A media presentation design tool, such as a media album generation utility, may be provided to assist the user 106 with this task, allowing the user 106 to select some media objects 104 from among the media collection 102, specify an arrangement (such as an order) of the selected media objects 104, and generate an album compilation (e.g., an audio disc or photo disc) of the selected media objects 104.
FIG. 2 presents an exemplary scenario 200 involving such a user-generated presentation of a media collection 102 in the form of a collage 202. For example, the user 106 may generate one or more collage pages, each page including a selection of images that are titled, sized, and positioned as desired by the user 106. The collage 202 may thus present an organization desired by the user 106, such as a summary, a thematic presentation, or a narrative, which may provide semantic context for the selected media objects 104 and the media collection 102. However, the generation of the collage 202 may demand considerable attention from the user 106, including extensive screening of the media collection 102, for example, to remove substandard images and to compare and select between redundant images. Particularly for larger media collections 102 (e.g., attempting to select and arrange an album from among the 1352 images comprising the media collection 102), the user 106 may not have the interest or ability to devote such attention to the generation of the collage 202.
B. Presented techniques
Presented herein are techniques for generating a media presentation of a media collection 102 that can reduce the complexity and attention required of a user 106 in generating a suitable media presentation of the media collection 102. In accordance with these techniques, a score may be identified for a corresponding media object 104 of the media collection 102 that indicates the quality, relevance, and/or level of interest of the media object 104 to the user 106, for example, in view of other media objects 104 of the media collection 102. These scores may be explicitly identified by the user 106; may be identified based on the activity of the user 106 (e.g., the amount of time the user 106 spends viewing each image); and/or may be automatically identified (e.g., an image quality assessment algorithm applied to estimate the quality of the corresponding images of the media collection 102). Further, corresponding media objects 104 may be identified as having associations with other media objects 104 of the media collection 102, such as a first image captured on the same day as a subset of the other images within the media collection 102, or depicting the same location or subject as a subset of the other images within the media collection 102. These associations may also be explicitly identified by the user 106 (e.g., by explicitly grouping images into different folders of the file system); may be implicitly identified based on the actions of the user 106 (e.g., naming or tagging each image to indicate a subject depicted in the image, and comparing the names or tags to identify images depicting the same subject); and/or may be automatically identified (e.g., using facial recognition algorithms to identify the persons depicted in each image).
In accordance with these techniques, the media collection 102 may be presented as a scalable media presentation, where the user 106 may choose to zoom the media presentation in and out in order to view different levels of detail. Further, in addition to allowing the user 106 to view more or less detail of a particular media object 104 of the media collection 102, the zoom state of the media presentation may be used as a "drill-down" metaphor for viewing more or fewer details of a particular portion of the media collection 102. For example, the media collection 102 may initially be presented at a low zoom level, and only the media objects 104 of the media collection 102 having a high score may initially be presented. If the user 106 selects a different zoom state near a particular media object 104 (e.g., zooming in to a higher zoom level at a particular location within the zoomable media presentation), the zoomable media presentation may insert, near the zoomed media object 104, one or more additional media objects 104 that are associated with the zoomed media object 104 (e.g., captured on the same day or depicting the same subject) but that score lower than the zoomed media object 104. In addition, the sizes of the media objects 104 may be adjusted according to the scores and zoom levels of the media objects 104; for example, at a particular zoom level, media objects 104 with a high score may appear at a larger size, media objects 104 with a medium score may appear at a medium size, and media objects 104 with a low score may appear at a smaller size (or may be hidden until the user 106 transitions to an even higher zoom state near these media objects 104). In this manner, the zoom level of the zoomable media presentation may be interpreted as a request by the user 106 to view more media objects 104 of the media collection 102 associated with the zoomed media object 104. Thus, the media collection 102 is presented as a hierarchical structure that initially shows only a small subset of the media objects 104 in the media collection 102 that have the highest scores, but other media objects 104 are readily accessible using familiar zoom operations as a contextual "drill-down" metaphor.
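The score- and zoom-dependent sizing mentioned here can be illustrated with a small helper; the base size, the 1-10 score range, and the scaling factors are assumptions for the sketch, not values taken from this description.

```python
def display_size(base_px: int, score: float, zoom: float) -> int:
    """Pixel size of a media object's preview, scaled by both the presentation's
    zoom factor and the object's score (assumed to lie in the range 1-10).

    Higher-scoring objects appear larger at the same zoom level; objects that
    would fall below a minimum size can be hidden until the user zooms further in.
    """
    score_factor = 0.5 + score / 10.0          # 0.6x .. 1.5x
    return int(base_px * zoom * score_factor)

# At the same zoom level, a 9-score image renders noticeably larger than a 2-score one.
print(display_size(100, score=9, zoom=1.0))    # 140
print(display_size(100, score=2, zoom=1.0))    # 70
print(display_size(100, score=2, zoom=2.5))    # 175 (revealed by zooming in)
```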
FIGS. 3-4 together present an exemplary scenario involving a media presentation of a media collection in accordance with the techniques presented herein. In the exemplary scenario 300 of FIG. 3, a user 106 may access a media collection 102 that includes 1352 media objects 104, which the user 106 may wish to view. The user 106 is allowed to identify a score 302 for the corresponding media object 104 on a zero-to-five-star scale, where a 5-star score 302 indicates images of high quality, high relevance, or high interest to the user 106, and a 1-star score 302 indicates images of low quality, low relevance, or low interest to the user 106. The user 106 may explicitly score some or all of the images of the media collection 102. For example, among the first three images, the user 106 may assign a 4-star score 302 to the first image, which is an attractive depiction of the scene; a second image depicting the same scene but with a skewed orientation may be assigned a 2-star score 302; and a third image depicting the same scene but out of focus may be assigned a 1-star score 302. Alternatively or additionally, the device presenting the media collection 102 to the user 106 may monitor the interactions 308 of the user 106 with the media objects 104 and may infer the scores 302 therefrom. For example, while viewing the media collection 102, the user 106 may select a particular media object 104; may view a particular media object 104 for a long period of time; may adjust the size of a particular media object 104 (e.g., enlarging the media object 104 to view it in greater detail, or reducing the media object 104 in size); and/or may share a media object 104 with another user 106 (e.g., sending a message 310 containing the media object 104 to a friend 312). From such interactions 308, the device may infer the scores 302 of the corresponding media objects 104 (e.g., identifying a higher score 302 for a first media object 104 that the user 106 views for a longer period of time than a second media object 104 having a lower score 302; identifying a higher score 302 for an image the user 106 chooses to enlarge; and identifying a lower score 302 for an image the user 106 chooses to shrink or hide). Further, the device may identify one or more associations between media objects 104 (e.g., media objects 104 created on the same day, presenting similar subjects, or organized together by the user 106).
Such scores 302 and associations may be used to generate a scalable media presentation of the media collection 102, in which the zoom level may be adjusted to "drill down" to different levels of detail within the media collection 102 in accordance with the techniques presented herein. FIG. 4 presents an exemplary scenario 400 involving a scalable media presentation 402 in various states. In a first state 406 (e.g., an initial state), the scalable media presentation 402 may be presented at a low zoom level 404, including only media objects 104 of the media collection 102 having a relatively high score 302 (e.g., media objects 104 having a score in the top 10% of the media collection 102, or media objects 104 having 4 or 5 stars). In a second state 408, a zoom-in display operation 410 (e.g., provided by the user 106 or specified by an application) may be detected that requests a higher zoom level 404 at a particular location in the scalable media presentation 402; furthermore, the location may be in the vicinity of a media object 104 having a high score 302. In accordance with the techniques presented herein, in a third state 412, the scalable media presentation 402 may be presented at the higher zoom level 404 near the zoomed media object 414, and a second media object 104 having a medium score 302 and associated with the zoomed media object 414 may be presented in the scalable media presentation 402. An additional zoom-in display operation 410 near the second media object 104 may result in a fourth state 416, presented at a high zoom level 404 of the scalable media presentation 402, in which a third media object 104 having a low score 302 and associated with the second media object 104 is presented near the second media object 104. A zoom-out display operation 418 in this fourth state 416 may result in a return to the third state 412, including the optional removal of media objects 104 having low scores 302. Further, the sizes of the corresponding media objects 104 may be adjusted within the scalable media presentation 402 according to the zoom level 404 and the corresponding scores 302 of the media objects 104 (e.g., media objects 104 with higher scores 302 may be adjusted to appear larger and media objects 104 with lower scores 302 may be adjusted to appear smaller). In this manner, the scalable media presentation 402 of the media collection 102 may allow the user 106 to interact with the media collection 102 in a hierarchical manner using familiar "zoom" operations in accordance with the techniques presented herein.
C. Exemplary embodiments
FIG. 5 presents a first embodiment of these techniques, illustrated as an exemplary method 500 of presenting a media collection 102 that includes at least one media object 104. The exemplary method 500 may include, for example, a set of processor-executable instructions that, when executed on a processor of a device, cause the device to present the media collection 102 in accordance with the techniques presented herein. The exemplary method 500 begins at 502 and involves executing 504 the instructions on a processor of the device. In particular, the instructions are configured to identify 506 a score 302 for a corresponding media object 104 within the media collection 102. The instructions are further configured to: upon receiving a request to present a media presentation, present 508 the scalable media presentation 402 at a low zoom level, including media objects 104 with high scores 302 (and excluding media objects 104 of the media collection 102 with lower scores 302 at the low zoom level). The instructions are further configured to: upon receiving a request to zoom the scalable media presentation 402 in the vicinity of a zoomed media object 414, insert 510, in the vicinity of the zoomed media object 414, media objects 104 associated with the zoomed media object 414 and having scores 302 lower than that of the zoomed media object 414. In this manner, the instructions executing on the processor cause the device to present the scalable media presentation 402 of the media collection 102 in accordance with the techniques presented herein, and the exemplary method 500 thus ends at 512.
FIG. 6 shows a second embodiment of these techniques, illustrated as an exemplary method 600 of generating a media presentation of a media collection 102 comprising at least one media object 104. The exemplary method 600 may include, for example, a set of processor-executable instructions stored in a memory component (e.g., a memory circuit, a platter of a hard disk drive, a solid-state storage device, or a magnetic or optical disc) of a device having a processor that, when executed, cause the device to present the media collection 102 in accordance with the techniques presented herein. The exemplary method 600 begins at 602 and involves executing 604 the instructions on the processor of the device. In particular, the instructions are configured to identify 606 a score 302 for a corresponding media object 104 within the media collection 102. The instructions are further configured to present 608 the scalable media presentation 402 at a low zoom level 404, wherein the scalable media presentation 402 (at the low zoom level 404) includes the media objects 104 of the media collection 102 having higher scores 302 (that is, it excludes the media objects 104 of the media collection 102 having lower scores 302 at the low zoom level). The instructions are further configured to: upon transitioning 610 to a zoom state near a zoomed media object 414, present, near the zoomed media object 414, media objects 104 associated with the zoomed media object 414 and having scores 302 lower than that of the zoomed media object 414; and adjust 614 the sizes of the corresponding media objects 104 based on the zoom state and the scores 302 of the media objects 104. In this manner, when executed on the processor, the instructions cause the device to generate the scalable media presentation 402 of the media collection 102 in accordance with the techniques presented herein, and the exemplary method 600 thus ends at 616.
Another embodiment relates to a computer-readable medium comprising processor-executable instructions configured to apply the techniques presented herein. Such computer-readable media may include, for example, computer-readable storage media involving a tangible device, such as a memory semiconductor (e.g., a semiconductor utilizing Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), and/or Synchronous Dynamic Random Access Memory (SDRAM) technology), a platter of a hard disk drive, a flash memory device, or a magnetic or optical disk (such as a CD-R, DVD-R or floppy disk) encoding a set of computer-readable instructions that, when executed by a processor of the device, cause the device to implement the techniques presented herein. Such computer-readable media may also include various types of communication media (as a type of technology other than computer-readable storage media), such as signals that may propagate through various physical phenomena, such as electromagnetic, acoustic, or optical signals, as well as in various wired situations, such as through ethernet or fiber optic cables, and/or wireless situations, such as Wireless Local Area Networks (WLANs) such as WiFi, Personal Area Networks (PANs) such as Bluetooth, or cellular or radio networks, that encode a set of computer-readable instructions that, when executed by a processor of a device, cause the device to implement the techniques presented herein.
FIG. 7 provides an illustration of a third embodiment of these techniques, illustrated as a computer-readable medium 700 (e.g., a CD-R, DVD-R, or a platter of a hard disk drive) having computer-readable data 704 encoded thereon. This computer-readable data 704 in turn comprises a set of computer instructions 706 configured to cause a device 712 to operate according to the principles set forth herein when executed on a processor 710 of the device. In one such embodiment, the processor-executable instructions 706 may be configured to perform a method 708 of presenting a media collection 102 including at least one media object 104, such as the exemplary method 500 of FIG. 5. In another such embodiment, the processor-executable instructions 706 may be configured to implement a method 708 of generating a media presentation of the media collection 102 including at least one media object 104, such as the exemplary method 600 of FIG. 6. Some embodiments of the computer-readable medium may include a non-transitory computer-readable storage medium (e.g., a hard disk drive, an optical disc, or a flash memory device) configured to store processor-executable instructions configured in this manner. Many such computer-readable media may be devised by those of ordinary skill in the art that are configured to operate in accordance with the techniques presented herein.
D. Variants
Variations of the techniques discussed herein are contemplated in many respects, and some variations may present additional advantages and/or reduce disadvantages relative to other variations of these and other techniques. Moreover, some variations may be implemented in combination, and some combinations may have additional advantages and/or reduce disadvantages through synergistic cooperation. The variations may be incorporated into various embodiments (e.g., exemplary method 500 of fig. 5 and exemplary method 600 of fig. 6) to confer individual and/or synergistic advantages upon such embodiments.
D (1) scenarios
A first aspect that may vary among embodiments of these techniques relates to the various scenarios in which such techniques may be utilized. As a first variation of this first aspect, these techniques may be implemented on many types of devices, including a client device configured to present the presentation of the media collection 102, or a server configured to generate a presentation to be presented on another device (e.g., a web server that generates the presentation as a web page for presentation in a web browser of a client device). Such devices may include, for example, workstations, servers, laptops, tablet and/or palmtop computers, mobile phones, media players, game consoles, televisions, still and motion cameras, Personal Data Assistants (PDAs), and Global Positioning System (GPS) receivers. Such devices may receive input from a user in a number of ways, such as a keyboard, a pointing device (e.g., a mouse), touch input, gestures, visual input (e.g., a motion camera configured to recognize a body gesture of the user), and voice input, and may provide output to the user in a number of ways, including a display component, a speaker, and a haptic device. Further, the devices may present a media collection 102 stored locally on the same device, a media collection 102 stored on another locally available device (e.g., a file server provided on the same network), or a media collection 102 stored on a remote server.
As a second variation of this first aspect, these techniques may be used with many types of media collections 102, such as collections of images (e.g., photographs, drawings, or paintings), video recordings (e.g., animations or captures of a real-world or virtual environment), audio recordings (e.g., captures of real or synthetic speech, music, or environmental sounds), and/or documents (e.g., text, pictorial writings, newspapers, or caricatures). The media collection 102 may also include one or more mixed media objects 104 (e.g., documents with embedded audio recordings), and may include media objects 104 of different types. The media collection 102 and/or media objects 104 may or may not be protected by Digital Rights Management (DRM) technology and/or subject to various licensing restrictions.
As a third variation of this first aspect, a number of types of scores 302 may be identified for the corresponding media objects 104. For example, a spectrum or scale may be established for the media collection 102, and the score 302 of a corresponding media object 104 may identify the location of that media object 104 within the spectrum or scale (e.g., a score of 1 to 10, or a number of stars). Alternatively, the score 302 may be identified arbitrarily, for example, as an unbounded number of points per media object 104, such as the number of seconds the user 106 spends consuming each media object 104. As another alternative, the score 302 of a first media object 104 may be relative to a second media object 104 of the media collection 102; for example, the various media objects 104 may be organized into a structure, such as a list or tree, that indicates the relative relevance or interest of each media object 104 with respect to the other media objects 104 of the media collection 102, and the score 302 may comprise an indication of the position of the media object 104 within the structure.
As a fourth variation of this first aspect, the media collection 102 may be organized in a number of ways. For example, the media objects 104 may be presented as an arbitrary collection, such as an unordered set; as an ordered list, such as a collection of media objects 104 having sequentially numbered filenames or other identifiers; or as a hierarchical structure represented in many ways, such as a set of relationships in a database, or the locations of the corresponding media objects 104 within a hierarchical organization such as a tree or a hierarchically structured file system. This organization may be utilized in many aspects of these techniques (e.g., to indicate associations between various media objects 104, such as media objects 104 grouped together in a folder of a hierarchical file system, or to identify the scores 302 of the corresponding media objects 104). Alternatively or additionally, a first media collection 102 may contain a media object 104 that is itself a second media collection 102, such that zooming in on that media object 104 first presents the media objects 104 of the second media collection 102 having high scores 302, and further zooming in near one of those media objects 104 presents other media objects 104 near the zoomed media object 414 that have lower scores 302 than the zoomed media object 414. Those skilled in the art can envision many scenarios in which the presently disclosed techniques can be utilized.
D (2) identifying media collection scores and associations
A second aspect that may vary among embodiments of these techniques relates to the manner of identifying the information used to implement the scalable media presentation 402 of the media collection 102, including the scores 302 of the media objects 104 and the associations among them. As a first variation, the scores 302 of the corresponding media objects 104 of the media collection 102 may be identified by the user 106, and the device 712 may be configured to receive the scores 302 of the corresponding media objects 104 from the user 106, store the scores 302, and utilize the scores 302 in generating the scalable media presentation 402 of the media collection 102. For example, as shown in the exemplary scenario 300 of FIG. 3, the media objects 104 of the media collection 102 may be presented with a visual control that allows the user 106 to select a score 302 for the corresponding media object 104. Alternatively, the score 302 of a corresponding media object 104 may be inferred, such as by presenting one or more media objects 104 to the user 106 and monitoring the interactions 308 of the user 106 with the media objects 104. For example, a high score 302 may be identified for a media object 104 that the user 106 chooses to review, reviews for a longer time, enlarges (e.g., expands to a larger version of the media object 104), saves, bookmarks, marks as relevant, and/or shares with friends 312 or other users. Conversely, a low score 302 may be identified for a media object 104 that the user 106 chooses to ignore, reviews only briefly, shrinks (e.g., reduces to a smaller version of the media object 104), marks as irrelevant, and/or deletes from the collection. The scores 302 may also be explicitly identified or inferred from the interactions 308 of users 106 other than the user 106 for whom the scalable media presentation 402 is presented; for example, within a social media network, individual users 106 may identify scores for corresponding media objects 104, and the collective scores 302 of the media objects 104 may be used to generate a scalable media presentation 402 of the media objects 104. The presentation of the media collection 102 for the user 106 may also change in response to the scores 302 assigned by the user 106; for example, media objects 104 assigned a higher score 302 may be enlarged in the presentation to the user 106, and media objects 104 assigned a lower score 302 may be reduced in size in the presentation to the user 106.
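A minimal sketch of inferring scores from monitored interactions follows; the event names and weights are illustrative assumptions rather than values prescribed by this description.

```python
from collections import defaultdict

# Assumed interaction weights; positive actions raise the inferred score,
# negative ones lower it. The event names are hypothetical.
EVENT_WEIGHTS = {
    "viewed": 0.5,
    "viewed_long": 1.5,
    "enlarged": 1.0,
    "shared": 2.0,
    "bookmarked": 1.5,
    "shrunk": -1.0,
    "skipped": -0.5,
    "deleted": -3.0,
}

def infer_scores(events):
    """Accumulate per-object scores from a stream of (object_id, event) pairs."""
    scores = defaultdict(float)
    for object_id, event in events:
        scores[object_id] += EVENT_WEIGHTS.get(event, 0.0)
    return dict(scores)

log = [("img1", "viewed_long"), ("img1", "shared"),
       ("img2", "viewed"), ("img2", "shrunk"), ("img3", "deleted")]
print(infer_scores(log))   # img1 scores well above img2 and img3
```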
As a second variation of this second aspect, the scores 302 of the corresponding media objects 104 of the media collection 102 may be identified in an automated manner, e.g., based not on the attention of the user 106 to the media objects 104 of the media collection 102, but on attributes of the corresponding media objects 104. As a first example of this second variation, a media object quality may be estimated for a corresponding media object 104, and a score 302 proportional to the estimated quality of the media object 104 may be selected for the corresponding media object 104. For example, for a media object 104 that comprises an image, image evaluation techniques may be utilized to estimate the image quality (e.g., sharpness, focus, contrast, and composition) of the image, and a score 302 may be identified that is proportional to the estimated quality of the image.
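One very simple way to approximate such an image-quality estimate is sketched below; the gradient-based sharpness measure, the contrast measure, and the weighting are assumptions chosen for illustration and stand in for whatever image-evaluation technique an embodiment actually uses.

```python
import numpy as np

def estimate_image_quality(gray: np.ndarray) -> float:
    """Very rough quality estimate for a grayscale image with values in [0, 1].

    Sharpness is approximated by the mean gradient magnitude and contrast by
    the intensity standard deviation; both are illustrative stand-ins.
    """
    gy, gx = np.gradient(gray.astype(float))
    sharpness = float(np.mean(np.hypot(gx, gy)))      # low for blurred images
    contrast = float(np.std(gray))                    # low for under/overexposed images
    return min(1.0, sharpness * 4.0) * 0.5 + min(1.0, contrast * 4.0) * 0.5

def quality_to_score(quality: float) -> float:
    """Map a [0, 1] quality estimate onto the 1-10 score range."""
    return 1.0 + 9.0 * quality

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))                 # noisy, high-gradient image
flat = np.full((64, 64), 0.5)                # featureless, low-contrast image
print(quality_to_score(estimate_image_quality(sharp)))
print(quality_to_score(estimate_image_quality(flat)))
```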
As a second example of this second variation, a corresponding media object 104 may relate to one or more subjects that may be important to the user 106, and a score 302 for the media object 104 may be selected that is proportional to the importance to the user 106 of the subjects associated with the media object 104. For example, the user 106 may have relationships with individuals in a social network, some of whom are closer (e.g., himself or herself, family members, and close friends), others of whom are more casual (e.g., more distant friends), and others of whom are more distant still (e.g., acquaintances). For a media object 104 such as an image depicting one or more individuals, biometrics may be utilized to identify the individuals depicted in the image, and a score 302 for the image may be selected based on the interest of the user 106 in the depicted individuals.
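A subject-importance score could be computed along the following lines; the relationship labels and weights are hypothetical, and the recognized-subject list would come from whatever biometric or facial-recognition step the embodiment employs.

```python
# Hypothetical importance weights for people known to the user, e.g. derived
# from a social graph: closer relationships carry more weight.
SUBJECT_IMPORTANCE = {
    "self": 1.0,
    "spouse": 1.0,
    "close_friend": 0.8,
    "distant_friend": 0.4,
    "acquaintance": 0.1,
}

def subject_score(depicted_subjects: list[str]) -> float:
    """Score (1-10) proportional to the importance of the most important
    subject recognized in the media object; unknown subjects count as 0."""
    if not depicted_subjects:
        return 1.0
    importance = max(SUBJECT_IMPORTANCE.get(s, 0.0) for s in depicted_subjects)
    return 1.0 + 9.0 * importance

print(subject_score(["spouse", "acquaintance"]))   # 10.0
print(subject_score(["acquaintance"]))             # 1.9
```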
As a third example of this second variation, an organizational structure (e.g., a hierarchical structure) of the media collection 102 may be utilized to identify the scores 302 of the corresponding media objects 104 of the media collection 102. For example, the score 302 of a corresponding media object 104 may be selected according to the hierarchical position of the media object 104 within the hierarchical structure (e.g., for a media collection 102 stored within a portion of a file system, media objects 104 closer to the root of the file system may be assigned a high score 302, and media objects 104 deep within the hierarchical structure may be assigned a low score 302). Alternatively or additionally, for a particular grouping of media objects 104 (e.g., media objects 104 stored as individual files within the same folder of a file system hierarchy), a representative media object 104 of the group may be selected, and a higher score 302 may be assigned to the representative media object 104 as compared to the other media objects 104 within the same group.
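The following sketch scores media objects from their file-system placement; the depth penalty, the representative bonus, and the choice of "first by name" as the representative are all assumptions made only to illustrate the idea.

```python
from pathlib import PurePosixPath
from collections import defaultdict

def hierarchy_scores(paths, root="photos"):
    """Assign higher scores to media objects closer to the collection root, and
    boost one representative object per folder."""
    by_folder = defaultdict(list)
    for p in paths:
        by_folder[str(PurePosixPath(p).parent)].append(p)

    root_depth = len(PurePosixPath(root).parts)
    scores = {}
    for folder, members in by_folder.items():
        depth = max(0, len(PurePosixPath(folder).parts) - root_depth)
        base = max(1.0, 8.0 - 2.0 * depth)          # shallower folders score higher
        representative = sorted(members)[0]         # arbitrary representative choice
        for p in members:
            scores[p] = base + (2.0 if p == representative else 0.0)
    return scores

paths = ["photos/cover.jpg", "photos/2011/trip/img1.jpg", "photos/2011/trip/img2.jpg"]
print(hierarchy_scores(paths))
```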
As a third variation of this second aspect, the associations between various media objects 104 may be identified in a variety of ways (and may be used, for example, to select the media objects 104 to be inserted into the scalable media presentation 402 near the zoomed media object 414). For example, an association may be explicitly identified by the user 106, such as by specifying a direct association between media objects 104, or by applying a tag identifying a shared attribute of the associated media objects 104. Alternatively or additionally, associations may be automatically identified based on shared data or metadata attributes, such as media objects 104 created on the same date, media objects 104 of the same type, media objects 104 generated with the same device or by the same user 106, or media objects 104 stored at the same location in the file system. Those of ordinary skill in the art may contemplate many ways of identifying the scores 302 of, and the associations among, the media objects 104 of the media collection 102 while implementing the techniques presented herein.
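A straightforward way to derive such associations from shared metadata is sketched below; the metadata fields and the pairwise comparison are assumptions for the example, and a real embodiment might use any shared attribute or an explicit user-specified link.

```python
from dataclasses import dataclass
from datetime import date
from itertools import combinations

@dataclass(frozen=True)
class Meta:
    object_id: str
    created: date
    folder: str
    tags: frozenset

def find_associations(objects):
    """Yield pairs of media objects that share a creation date, a folder, or a tag."""
    for a, b in combinations(objects, 2):
        reasons = []
        if a.created == b.created:
            reasons.append("same date")
        if a.folder == b.folder:
            reasons.append("same folder")
        if a.tags & b.tags:
            reasons.append("shared tag")
        if reasons:
            yield (a.object_id, b.object_id, reasons)

objs = [
    Meta("img1", date(2011, 7, 4), "trip", frozenset({"beach"})),
    Meta("img2", date(2011, 7, 4), "trip", frozenset({"beach", "family"})),
    Meta("img3", date(2011, 8, 1), "home", frozenset({"family"})),
]
print(list(find_associations(objs)))
```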
D (3) generating a scalable media presentation
A third aspect that may vary among embodiments of these techniques relates to generating the scalable media presentation 402 of the media collection 102. As a first variation of this third aspect, the scalable media presentation 402 may be arbitrarily scalable; for example, the user 106 may select any zoom level focused at any location within the scalable media presentation 402. Alternatively, the scalable media presentation 402 may be discretely scalable; for example, the user 106 may only be able to view the scalable media presentation 402 within a constrained set of zoom levels and/or positions.
As a second variation of this third aspect, different types of zoom mechanisms may be presented within the scalable media presentation 402. As a first example of this, changing the zoom level of the scalable media presentation 402 may alter various attributes of the corresponding media objects 104 presented therein, including the size, quality, and amount of detail presented in each media object 104. For example, zooming in on a media object 104 representing a document may result in the presentation of a larger depiction of the document, involving a higher-quality rendering of the fonts used to depict the text of the document, and/or the presentation of more data about the document or a longer summary of the document, and possibly a depiction of the entire content of the document.
As a second example of this, the presentation of the media collection 102 may be altered in a variety of ways by inserting additional media objects 104 into the scalable media presentation 402 in response to a zoom-in display operation. FIG. 8 presents a first exemplary scenario 800 of one implementation involving a zoomable media presentation 402 that comprises a slideshow of the media collection 102. The slideshow may, for example, present a timed sequence of the media objects 104 having high scores 302, optionally including titles and visual transitions between elements, and may allow a user 106 viewing the slideshow to pause, speed up, slow down, reorder, or edit the media objects 104 of the slideshow. The slideshow may also allow a zoom operation to alter the slideshow sequence by inserting or removing additional media objects 104 associated with the zoomed media object 414 in accordance with the techniques presented herein. For example, in the first state 802 of the exemplary scenario 800 of FIG. 8, a first media object 104 of the media collection 102 having a high score 302 may be presented for a brief duration. Without user input from the user 106, the slideshow may continue to a second state 804 in which a second media object 104 having a high score 302 is presented for a brief duration. If a zoom-in display operation 410 is detected within the scalable media presentation 402 during the second state 804, a third state 806 may present the scalable media presentation 402 depicting the zoomed media object 414 (the second media object 104 being displayed when the zoom-in display operation 410 was detected) at a larger size, and may insert one or more media objects 104 associated with the zoomed media object 414 and having scores 302 lower than that of the zoomed media object 414. Additional zoom-in display operations 410 may result in the insertion of additional media objects 104 in accordance with the techniques presented herein. Conversely, a zoom-out display operation 418 received during the third state 806 (or the absence of user input from the user 106 for a specified period of time) may result in a fourth state 808 in which the zoom level 404 is reset to the initial zoom level and presentation of the slideshow resumes with the next media object 104 having a high score 302 in the sequence. In this manner, various zoomable aspects of the zoomable media presentation 402 may be used to add or remove detail (in the form of inserting or removing additional media objects 104 with lower scores 302) from a slideshow presentation.
FIG. 9 presents a second exemplary scenario 900 of one implementation involving a scalable media presentation 402 of a media collection 102. In a first state 902 of this exemplary scenario 900, a set of media objects 104 from the media collection 102 is presented in the scalable media presentation 402 at a particular zoom level 404. When a zoom-in display operation 410 is detected proximate to a zoomed media object 414, the zoomable media presentation 402 begins to transition to the zoom level 404 indicated by the zoom-in display operation 410. For example, if the zoom-in display operation 410 is performed by a variable-magnitude input (e.g., a mouse wheel that may be rotated quickly or slowly, or a touchpad gesture that can be performed in a larger or smaller manner), the degree of transition may be associated with the input magnitude (e.g., a transition to a much higher zoom level 404 for a large-magnitude input, and a transition to a slightly or moderately higher zoom level 404 for a small-magnitude input). In this exemplary scenario 900, the transition may pass through a first transition state 904 and a second transition state 906 before reaching a target state 908 representing the desired zoom level 404. Further, the transition states may present various depictions, such as smooth or stop-motion animations, indicating progress toward the target state 908. For example, in the first transition state 904, media objects 104 that are not near the zoom-in display operation 410 and that are not associated with the zoomed media object 414 may be transitioned out of the scalable media presentation 402 (e.g., by sliding, fading, or shrinking out of view), while media objects 104 that are associated with the zoomed media object 414 and that have scores 302 lower than that of the zoomed media object 414 may be transitioned into the scalable media presentation 402 (e.g., by sliding into view from a boundary of the scalable media presentation 402). In addition, at least one dimension of the zoomed media object 414 may be reduced to expose a portion of the presentation space, and the newly inserted media objects 104 may be positioned within the exposed portion of the presentation space. These transition animations may continue through the second transition state 906, and possibly other transition states, until the target state 908 is reached, where a zoom-out display operation 418 may cause the transition to reverse back to the first state 902. Thus, the zoom-in display operation 410 is utilized as a "drill-down" metaphor in order to present more media objects 104 of the media collection 102 that are associated with the zoomed media object 414. Although the zoomed media object 414 is actually resized relative to the first state 902 of the zoomable media presentation 402, the zoom metaphor is preserved in a number of ways (e.g., the zoom-in display operation 410 may be invoked by a familiar "zoom in" gesture, such as a multi-touch divergent gesture or a mouse wheel-up operation; the zoom operation is reversible; and a background image may be proportionally resized to reflect the zoom level 404).
FIG. 10 presents a third exemplary scenario 1000 involving one implementation of a scalable media presentation 402 of a media collection 102 comprising media streams (such as a video presentation), and some exemplary effects of zoom operations thereon. In this third exemplary scenario 1000, the media objects 104 represent video clips from different dates (depicting, for example, events captured on different days of a multi-day event), each having a score 302. For example, video clips 1-2 may be captured on the first day; video clips 3-5 may be captured on the second day; and video clips 6-7 may be captured on the third day. In this exemplary scenario 1000, the media collection 102 is presented as a concatenation of selected video clips (depicting, for example, a summary of the multi-day event), and the zoom mechanism of the scalable media presentation 402 is implemented to adjust the amount of detail in a particular portion of the concatenation by including or removing video clips. For example, in the first state 1002, the scalable media presentation 402 may represent the media collection 102 as a concatenation of a representative video clip for each day, such as the video clip from each day having the highest score 302 (video clips 1, 3, and 7). However, a zoom-in display operation 410 near the second video clip in the series (video clip 2) results in a second state 1004 into which is inserted an additional video clip, not included in the first state 1002, that is from the same day as the zoomed video clip and has the highest score 302 among such clips. Similarly, in the second state 1004, a further zoom-in display operation 410 between video clips 3 and 5 results in the insertion, between these two video clips, of the video clip having the highest score 302 (i.e., video clip 4) among all the video clips between video clips 3 and 5 that are not currently included. Finally, in a third state 1006, a further zoom-in display operation 410 between video clips 5 and 7 results in the insertion, between these two video clips, of the video clip having the highest score 302 (i.e., video clip 6) among all the video clips between video clips 5 and 7 that are not currently included. Thus, the zoom-in display operation allows the presentation of an expanded video summary of the event, and in particular allows detail to be added in real time to selected portions of the video summary without adding detail to other portions. Conversely, a zoom-out display operation 418 may allow the removal of lower-scoring video clips, resulting in a more concise video summary of the relevant dates of the event.
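The day-based summary and its zoom-driven expansion can be sketched as follows; the clip identifiers and scores roughly mirror the example above, while the "highest-scoring clip per day" and "highest-scoring not-yet-included clip between two clips" rules are a plausible reading of the described behavior rather than a definitive specification.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    clip_id: int
    day: int
    score: float

CLIPS = [Clip(1, 1, 7), Clip(2, 1, 5), Clip(3, 2, 9), Clip(4, 2, 4),
         Clip(5, 2, 6), Clip(6, 3, 3), Clip(7, 3, 8)]

def initial_summary(clips):
    """Concatenation of the highest-scoring clip from each day."""
    best = {}
    for c in clips:
        if c.day not in best or c.score > best[c.day].score:
            best[c.day] = c
    return sorted(best.values(), key=lambda c: c.clip_id)

def zoom_in(summary, clips, left_id, right_id):
    """Insert the highest-scoring clip that lies between two clips already in
    the summary and is not yet included; zooming out would remove it again."""
    candidates = [c for c in clips
                  if left_id < c.clip_id < right_id and c not in summary]
    if not candidates:
        return summary
    insert = max(candidates, key=lambda c: c.score)
    return sorted(summary + [insert], key=lambda c: c.clip_id)

summary = initial_summary(CLIPS)                 # clips 1, 3, 7
summary = zoom_in(summary, CLIPS, 3, 7)          # inserts clip 5
summary = zoom_in(summary, CLIPS, 3, 5)          # inserts clip 4
print([c.clip_id for c in summary])              # [1, 3, 4, 5, 7]
```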
As a third variation of this third aspect, the manner in which media objects 104 are inserted into and/or removed from the scalable media presentation 402 may vary; for example, media objects 104 may be inserted quickly, such as the rapid expansion of the video summary in the exemplary scenario 1000 of FIG. 10. For example, in a deeply scalable media presentation 402, media objects 104 with low scores 302 may be included but may be adjusted to such a small size as to be unrecognizable, indistinguishable, or completely hidden in the initial low-zoom-level mode of the scalable media presentation 402 (e.g., at the lowest zoom level 404 of the scalable media presentation 402, low-scoring media objects 104 may be adjusted down to one or two pixels that may easily be overlooked, or may even be adjusted below one pixel and thus may not appear on the display of the device). Alternatively, an embodiment of these techniques may include in the scalable media presentation 402 only those media objects 104 that are adjusted above a minimum scale threshold, and may omit from the scalable media presentation 402 any media objects 104 that are adjusted below the minimum scale threshold. Further, as the zoom level 404 of the scalable media presentation 402 changes, transitions may be utilized to indicate the addition or removal of media objects 104. For example, an embodiment may be configured to: upon transitioning to a higher zoom level 404 at which a particular media object 104 is adjusted above the minimum scale threshold, transition the media object 104 into the scalable media presentation 402; and/or, upon transitioning to a lower zoom level 404 at which the media object 104 is adjusted below the minimum scale threshold, transition the media object 104 out of the scalable media presentation 402. Such a transition may, for example, depict the media object 104 as fading, popping, resizing, or sliding into or out of its position within the scalable media presentation 402.
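A simple visibility rule based on such a minimum scale threshold might look like the following; the threshold value, base size, and sizing formula are assumptions carried over from the earlier illustrative sizing sketch.

```python
MIN_PIXELS = 16   # assumed minimum rendered size below which an object is omitted

def rendered_size(base_px, score, zoom):
    """Score- and zoom-dependent size, mirroring the earlier illustrative sketch."""
    return base_px * zoom * (0.5 + score / 10.0)

def objects_to_render(objects, zoom, base_px=64):
    """Partition objects into those rendered at this zoom level and those kept
    out of the presentation because they fall below the minimum scale threshold."""
    shown, hidden = [], []
    for object_id, score in objects:
        target = shown if rendered_size(base_px, score, zoom) >= MIN_PIXELS else hidden
        target.append(object_id)
    return shown, hidden

objs = [("img_hi", 9.0), ("img_mid", 5.0), ("img_lo", 1.5)]
print(objects_to_render(objs, zoom=0.25))   # low-scoring object hidden when zoomed out
print(objects_to_render(objs, zoom=1.0))    # everything clears the threshold
```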
As a fourth variation of this third aspect, where the scalable media presentation 402 at a particular zoom level 404 hides one or more media objects 104 of the media collection 102 from view, an embodiment of these techniques may include, in the scalable media presentation 402, a zoom indicator that indicates the availability of one or more additional media objects 104 that become visible at a higher zoom level 404. For example, for a corresponding media object 104 associated with at least one hidden media object 104 that is adjusted below a minimum scale threshold, the embodiment may present, in the scalable media presentation 402 in the vicinity of that media object 104, a zoom indicator indicating the at least one hidden media object 104. The zoom indicator may be presented as a non-interactive visual indicator of such availability and/or of the zoom level 404 at which the additional media objects 104 become visible. Alternatively, the zoom indicator may be presented as an interactive control; upon detecting an interaction of the user 106 with the zoom indicator, for example, the embodiment may transition the scalable media presentation 402 to the higher zoom level 404 at which the at least one hidden media object 104 is adjusted above the minimum scale threshold and is therefore presented to the user 106. Further, the zoom indicator may indicate the current zoom level 404 of the scalable media presentation 402 and/or may include a control (e.g., a slider) allowing the user 106 to select the zoom level 404 of the scalable media presentation 402.
As a fourth variation of this third aspect, the scalable media presentation 402 may arbitrarily position the media objects 104 within the presentation space of the media collection 102 (e.g., within a window, pane, tab, control, or section in which the media collection 102 is presented). For example, the initially presented media objects 104 may be equally spaced within the presentation space, or may even float within the presentation space; and when a media object 104 is to be inserted into the scalable media presentation 402, the location of the inserted media object 104 may be selected arbitrarily (so long as the inserted media object 104 is in proximity to the zoomed media object 414). Alternatively, an embodiment of these techniques may select the locations of the respective media objects 104 in order to achieve a particular arrangement of the media objects 104. As a first example, the scalable media presentation 402 may include a media collection context (such as a region), and the respective media objects 104 may be related to the media collection context (e.g., the geographic location of the corresponding media object 104 within the region). Thus, the presentation of the scalable media presentation 402 may include a contextual depiction of the media collection context (e.g., a map of the region), and the position of each media object 104 within the presentation space of the scalable media presentation 402 may be selected according to the position of that media object 104 with respect to the media collection context.
As a second example, the media objects 104 of the media collection 102 may be sorted according to a sorting criterion selected by the user 106 (e.g., creation date order, alphabetical name order, or score order). Arranging the media objects 104 within the scalable media presentation 402 may then involve identifying an order of each media object 104 based on the sorting criterion and positioning the respective media objects 104 within the presentation space based on the order of each media object 104. For example, the presentation space of the scalable media presentation 402 may include one or more axes, each axis representing a different sorted attribute of the media collection 102; and, in addition to positioning the media objects 104 of the media collection 102 with respect to the associations therebetween, the respective media objects 104 may be positioned along at least one axis based on the corresponding attributes of the media objects 104. Those skilled in the art may devise many aspects of the presentation of the scalable media presentation 402 in accordance with the techniques presented herein.
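By way of non-limiting illustration, the following sketch shows one way media objects might be positioned along a day axis and a time-of-day axis as described above (and as depicted in FIG. 12); the DatedMediaObject and Placement types, the placeOnAxes helper, and the coordinate conventions are assumptions introduced for illustration only.

```typescript
// Hypothetical sketch only; names, units, and coordinate conventions are assumptions.
interface DatedMediaObject {
  id: number;
  score: number;  // score 302
  captured: Date; // capture date/time used as the sorting attribute
}

interface Placement {
  x: number; // position along the first axis (day within the period)
  y: number; // position along the second axis (time of day; smaller = earlier/higher)
}

const MS_PER_DAY = 24 * 60 * 60 * 1000;

// Position each media object on the two axes; these coordinates are kept
// fixed across zoom operations, which only change the objects' sizes.
function placeOnAxes(
  objects: DatedMediaObject[],
  periodStart: Date,
  periodDays: number,
  presentationWidth: number,
  presentationHeight: number
): Map<number, Placement> {
  const placements = new Map<number, Placement>();
  for (const obj of objects) {
    const elapsedMs = obj.captured.getTime() - periodStart.getTime();
    const day = Math.floor(elapsedMs / MS_PER_DAY);
    const msIntoDay = elapsedMs - day * MS_PER_DAY;
    placements.set(obj.id, {
      x: ((day + 0.5) / periodDays) * presentationWidth,
      y: (msIntoDay / MS_PER_DAY) * presentationHeight,
    });
  }
  return placements;
}
```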
FIG. 11 provides an illustration of an exemplary scenario 1100 involving some features related to a scalable media presentation 402. In this exemplary scenario 1100, the scalable media presentation 402 includes a media collection context, such as a region within which the images comprising the media collection 102 were captured. Accordingly, in a first state 1104, the scalable media presentation 402 may present a depiction 1102 of the media collection context, such as a map of the region. In a second state 1106 (e.g., an initial state), the scalable media presentation 402 may present only the three media objects 104 within the media collection 102 that have high scores 302; and an embodiment of these techniques may place the media objects 104 on the depiction 1102 according to the geographic coordinates of the images within the region. The sizes of the respective media objects 104 are also adjusted according to the zoom level 404 of the scalable media presentation 402 and the scores 302 of the media objects 104. Further, in the second state 1106, a fourth media object 104 may be hidden from view (due to the lower score 302 of that media object 104 and the current low zoom level 404 of the scalable media presentation 402), and a zoom indicator 1108 may be presented to indicate the availability of the fourth media object 104 at a higher zoom level 404. When the user 106 selects the zoom indicator 1108, the embodiment may transition the scalable media presentation 402 to the higher zoom level 404 at which the fourth media object 104 is visible. Further, upon reaching the higher zoom level 404 at which the fourth media object 104 is visible (e.g., at which the fourth media object 104 is adjusted above the minimum scale threshold), the embodiment may transition the fourth media object 104 into the scalable media presentation 402. For example, in a third state 1110, the newly inserted fourth media object 104 appears small and translucent, but shortly thereafter (e.g., in a fourth state 1112), the newly inserted fourth media object 104 appears at full size and fully opaque. In this manner, the exemplary scenario 1100 of FIG. 11 presents several of the variations of the third aspect described herein.
FIG. 12 provides an illustration of another exemplary scenario 1200 involving the placement of media objects 104 within a scalable media presentation 402 based on various attributes of the media objects 104. In this exemplary scenario 1200, the media collection 102 includes media objects 104 that are each associated with a day within a time period, and the scalable media presentation 402 includes a first axis 1204 representing the days of the time period. The scalable media presentation 402 therefore places the respective media objects 104 along the first axis 1204 according to the dates of the media objects 104. In the first state 1202, if a zoom-in operation 410 is detected near a zoomed media object 414, a second state 1206 of the scalable media presentation 402 may insert, near the zoomed media object 414, two additional media objects 104 that are associated with the zoomed media object 414 and that have lower scores 302 than the zoomed media object 414. Further, in the second state 1206, the scalable media presentation 402 may present a second axis 1208 representing the time of day depicted by each media object 104; for example, images captured at earlier times of day are positioned higher than images captured at later times of day. This positioning may be maintained during further zoom operations; for example, a further zoom-in operation 410 detected in the second state 1206 may result in a third state 1210 in which the respective media objects 104 are resized accordingly, but remain positioned with respect to the first axis 1204 and the second axis 1208. In this manner, in accordance with the techniques presented herein, the media objects 104 of the media collection 102 may be arranged within the presentation space in addition to exhibiting the "drill-down" aspects of the scalable media presentation 402.
As a fifth variation of this third aspect, the presentation of the media collection 102 may be adjusted differently for different users 106. As a first example, for a particular media collection 102, a first user 106 may assign a first set of scores 302 to the respective media objects 104 of that media collection 102, and an embodiment of these techniques may present the media collection 102 as a first scalable media presentation 402 using the first set of scores 302. A second user 106 may assign a different, second set of scores 302 to the respective media objects 104 of the media collection 102 (e.g., by explicitly assigning the scores 302, through interactions with the media objects 104 of the media collection 102 from which the scores 302 may be inferred, or through identification of the subjects associated with the respective media objects 104 and the relative interest of the user 106 in the depicted subjects), and an embodiment of these techniques may utilize the second set of scores 302 to present the media collection 102 as a second scalable media presentation 402. Further, the sets of scores 302 of different users 106 may be retained (e.g., as part of a user profile for the corresponding user 106 stored by the service presenting the media collection 102 to the user 106, or as a cookie on the device of the corresponding user 106), such that when the user 106 re-accesses the media collection 102, the scalable media presentation 402 may be generated with the scores 302 previously assigned by that user 106. As a second example, the scores 302 of the media objects 104 assigned by a first user 106 (which may comprise a set of users) may be used to present the media objects 104 to a second user 106 (e.g., presenting media objects 104 of the media collection 102 that have been identified as popular by other users 106, or presenting media objects 104 scored by the first user 106 on behalf of the second user 106). As a third example, the second user 106 may alter the scalable media presentation 402 generated by the first user 106 (e.g., by reassigning the initial set of scores 302 assigned by the first user 106) to generate a scalable media presentation 402 of the media collection 102 that is customized by and for the second user 106. Those skilled in the art may devise many ways of allowing multiple users to generate and customize a scalable media presentation 402 that may be compatible with the techniques presented herein.
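By way of non-limiting illustration, the following sketch shows one way per-user score sets might be retained and applied when generating the scalable media presentation; the ScoreSet type, the in-memory store, and the function names are assumptions (an actual embodiment might instead persist the scores in a user profile or a cookie, as noted above).

```typescript
// Hypothetical sketch only; the in-memory store stands in for a user profile
// or cookie in which the scores 302 might actually be persisted.
type ScoreSet = Map<number, number>; // media object id -> score 302

const scoresByUser = new Map<string, ScoreSet>();

// Retain the scores a user assigned (or that were inferred from the user's
// interactions), so the presentation can be regenerated when the user returns.
function saveScores(userId: string, scores: ScoreSet): void {
  scoresByUser.set(userId, new Map(scores));
}

// Resolve the score set used to generate a user's presentation, falling back
// to scores assigned by other users (e.g., popularity) when the user has none.
function scoresForPresentation(userId: string, fallback: ScoreSet): ScoreSet {
  return scoresByUser.get(userId) ?? fallback;
}
```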
E. Computing environment
FIG. 13 gives an illustration of an exemplary computing environment within a computing device 1302 in which the techniques presented herein may be implemented. Exemplary computing devices include, but are not limited to, personal computers, server computers, hand-held or laptop devices, mobile devices (such as mobile telephones, Personal Digital Assistants (PDAs), media players, and the like), multiprocessor systems, consumer electronics, minicomputers, mainframe computers, and distributed computing environments that include any of the above systems or devices.
FIG. 13 illustrates an example of a system 1300 that includes a computing device 1302 configured to implement one or more embodiments provided herein. In one configuration, computing device 1302 includes at least one processor 1306 and at least one memory component 1308. Depending on the exact configuration and type of computing device, the memory component 1308 may be volatile (such as RAM), non-volatile (such as ROM, flash memory, etc.), or some intermediate or hybrid type of memory component. This configuration is illustrated in FIG. 13 by dashed line 1304.
In some embodiments, device 1302 may include additional features and/or functionality. For example, device 1302 may include one or more additional storage components 1310 including, but not limited to, hard disk drives, solid state memory devices, and/or other removable or non-removable magnetic or optical media. In one embodiment, computer-readable and processor-executable instructions to implement one or more embodiments provided herein are stored in storage component 1310. Storage component 1310 may also store other data objects, such as components of an operating system, an executable binary that constitutes one or more applications, programming libraries (e.g., Application Programming Interfaces (APIs)), media objects, and documents. Computer readable instructions may be loaded into memory component 1308 for execution by processor 1306.
Computing device 1302 may also include one or more communication components 1316 that allow computing device 1302 to communicate with other devices. The one or more communication components 1316 may include, for example, a modem, a Network Interface Card (NIC), a radio frequency transmitter/receiver, an infrared port, or a Universal Serial Bus (USB) connection. Such communication components 1316 may include a wired connection (connecting to a network through a physical cord, cable, or wire) or a wireless connection (such as wireless communication with a networked device via visible light, infrared, or one or more radio frequencies).
Computing device 1302 may include one or more input components 1314 such as a keyboard, mouse, pen, voice input device, touch input device, infrared camera, or video input device, and/or one or more output components 1312 such as one or more displays, speakers, and printers. Input component 1314 and/or output component 1312 may be connected to computing device 1302 by a wired connection, wireless connection, or any combination thereof. In one embodiment, input components 1314 or output components 1312 from another computing device may be used as input components 1314 and/or output components 1312 of computing device 1302.
The various components of computing device 1302 may be connected together by various interconnects, such as a bus. Such interconnects may include a Peripheral Component Interconnect (PCI), such as PCI Express, a Universal Serial Bus (USB), Firewire (IEEE 1394), an optical bus structure, and the like. In another embodiment, various components of computing device 1302 may be interconnected by a network. For example, memory component 1308 may be comprised of multiple physical memory units located in different physical locations and interconnected by a network.
Those skilled in the art will realize that storage devices utilized to store computer readable instructions may be distributed across a network. For example, a computing device 1320 accessible via network 1318 may store computer readable instructions to implement one or more embodiments provided herein. Computing device 1302 may access computing device 1320 and download a part or all of the computer readable instructions for execution. Alternatively, computing device 1302 may download pieces of the computer readable instructions, as needed, or some instructions may be executed at computing device 1302 and some at computing device 1320.
F. Use of the term
As used in this application, the terms "component," "module," "system," "interface," and the like are generally intended to refer to a computer-related entity, either hardware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a controller and the controller can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers.
Furthermore, the claimed subject matter may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter. The term "article of manufacture" as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier, or media. Of course, those skilled in the art will recognize many modifications may be made to this configuration without departing from the scope or spirit of the claimed subject matter.
Various operational embodiments are provided herein. In one embodiment, one or more of the operations described may constitute computer readable instructions stored on one or more computer readable media, which when executed by a computing device, will cause the computing device to perform the operations described. The order in which some or all of the operations are described should not be construed as to imply that these operations are necessarily order dependent. Persons skilled in the art will recognize alternative orderings given the benefit of this description. Further, it should be understood that not all operations are necessarily present in each embodiment provided herein.
Moreover, the word "exemplary" is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as "exemplary" is not necessarily to be construed as advantageous over other aspects or designs. Rather, use of the word "exemplary" is intended to present concepts in a concrete fashion. The term "or" as used in this application means an inclusive "or" rather than an exclusive "or". That is, unless specified otherwise, or clear from context, "X employs A or B" shall mean any of the natural inclusive permutations. That is, if X employs A, X employs B, or X employs both A and B, then "X employs A or B" is satisfied under any of the foregoing instances. In addition, "a" and "an" as used in this application and the appended claims may generally be construed to mean "one or more" unless specified otherwise or clear from context to be directed to a singular form.
Further, although the disclosure has been shown and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art based upon a reading and understanding of this specification and the annexed drawings. The present disclosure includes all such modifications and alterations, and is limited only by the scope of the appended claims. In particular regard to the various functions performed by the above described components (e.g., elements, resources, etc.), the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., that is functionally equivalent), even though not structurally equivalent to the disclosed structure which performs the function in the herein illustrated exemplary implementations of the disclosure. In addition, while a particular feature of the disclosure may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application. Furthermore, to the extent that the terms "includes," "has," "with," and the like are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term "comprising."

Claims (15)

1. A method of rendering a media collection comprising at least one media object on a device having a processor, the method comprising:
sending instructions to the device that, when executed on a processor, cause the device to present the media collection by:
identifying a score for each media object within the media collection;
upon receiving a request to present the media collection, presenting a scalable media presentation at a low zoom level that includes only media objects with high scores; and
upon receiving a request to zoom the scalable media presentation in proximity to the zoomed media object, inserting a media object associated with the zoomed media object and having a lower score than the zoomed media object in proximity to the zoomed media object.
2. The method of claim 1:
the media collection comprises media streams;
the corresponding media object comprises a media segment within the media stream; and
Inserting the media object associated with the zoomed media object comprises: inserting a media segment having a lower score than the scaled media object in proximity to the scaled media object.
3. A method of generating a media presentation of a media collection comprising at least one media object on a device having a processor, the method comprising:
executing on a processor instructions configured to:
for a corresponding media object, identifying a score for the media object within the media collection;
presenting the scalable media presentation at a low zoom level comprising only media objects of the media collection having a high score; and
upon transitioning to a zoomed state in proximity to a zoomed media object:
presenting, in proximity to the zoomed media object, a media object associated with the zoomed media object and having a lower score than the zoomed media object; and
adjusting the size of the corresponding media object according to the zoomed state and the score of the media object.
4. The method of claim 3, at least one media object comprising a media subset of said media collection.
5. The method of claim 3, identifying a score for the corresponding media object comprising: receiving a score for the media object from a user.
6. The method of claim 5, receiving a score for the media object from the user comprising:
presenting to a user individual media objects of a media collection;
monitoring user interaction with corresponding media objects of a media collection; and
selecting the score of the corresponding media object according to the interaction of the user with the media object.
7. The method of claim 3, identifying a score for the corresponding media object comprising: selecting the score of the media object based on at least one attribute of the media object.
8. The method of claim 7:
corresponding media objects of the media collection are grouped into at least one media object group; and
Selecting a score for the media object includes:
for a corresponding media object grouping:
selecting a high score for a representative media object of the media object group; and
selecting a low score for the media objects of the media object group other than the representative media object.
9. The method of claim 7, selecting a score for the media object comprising:
identifying a media object quality of a corresponding media object; and
selecting the score of the corresponding media object according to the media object quality.
10. The method of claim 7:
at least one media object is associated with at least one subject, the corresponding subject having an importance to a user; and
Selecting a score for the corresponding media object comprises:
identifying at least one subject associated with the media object; and
selecting the score for the media object based on the importance of the at least one subject associated with the media object.
11. The method of claim 7:
the individual media objects of the media collection are arranged according to a hierarchical structure; and
Selecting a score for the corresponding media object comprises: the score for a media object is selected based on the hierarchical position of the media object within the hierarchical structure.
12. The method of claim 3, presenting the scalable media presentation comprising: presenting, in the scalable media presentation, only media objects that are adjusted above a minimum scale threshold.
13. The method of claim 12, presenting the scalable media presentation comprising:
transitioning the media object into the scalable media presentation upon a transition to a higher zoom level in which the media object is adjusted above the minimum scale threshold; and
transitioning the media object out of the scalable media presentation upon a transition to a lower zoom level in which the media object is adjusted below the minimum scale threshold.
14. The method of claim 12, presenting the scalable media presentation in the zoomed state comprising: for a corresponding media object associated with at least one hidden media object adjusted below the minimum scale threshold, presenting, in proximity to the media object, a zoom indicator indicating the at least one hidden media object.
15. The method of claim 14, comprising: upon detecting an interaction with the zoom indicator, transitioning the scalable media presentation to a higher zoom level in which the at least one hidden media object is adjusted above the minimum scale threshold.