Description Title of Invention: METHOD AND APPARATUS FOR ENCODING AND DECODING MOTION VECTOR BASED ON REDUCED MOTION VECTOR PREDICTOR CANDIDATES [1] The present application is a divisional application from Australian Patent Application No. 2011207924, the entire disclosure of which is incorporated herein by reference. Technical Field [1a] Apparatuses and methods consistent with exemplary embodiments relate to encoding and decoding a motion vector, and more particularly, to predictively encoding and decoding a motion vector of a current block. Background Art [2] In a codec such as MPEG-4 or H.264/MPEG-4 advanced video coding (AVC), motion vectors of previously encoded blocks adjacent to a current block may be used to predict a motion vector of the current block. A median of the motion vectors of the previously encoded blocks adjacent to a left side, an upper side, and a right upper side of the current block is used as a motion vector predictor of the current block. The motion vector of the current block is not encoded directly; instead, the difference between the motion vector and the motion vector predictor is encoded. Disclosure of Invention [3] One or more exemplary embodiments provide a method and apparatus for predictively encoding and decoding a motion vector, and a computer readable recording medium having recorded thereon a computer program for executing the method. Advantageous Effects of Invention [4] According to an exemplary embodiment, when motion vector predictor candidates are used to predictively encode and decode a motion vector, the number of motion vector predictor candidates may be reduced. Accordingly, information required to specify the motion vector predictor used to predict a motion vector of a current block from among the motion vector predictor candidates may be encoded with a minimum number of bits, so that the compression ratio of encoding/decoding a motion vector is increased and, thereby, the compression ratio of encoding/decoding an image may be improved. Brief Description of Drawings [5] The above and other features and advantages will become more apparent by describing in detail exemplary embodiments with reference to the attached drawings in which: [6] FIG. 1 is a block diagram of an apparatus for encoding an image, according to an exemplary embodiment; [7] FIG. 2 is a block diagram of an apparatus for decoding an image, according to an exemplary embodiment; [8] FIG. 3 illustrates hierarchical coding units according to an exemplary embodiment; [9] FIG. 4 is a block diagram of an image encoder based on a coding unit, according to an exemplary embodiment; [10] FIG. 5 is a block diagram of an image decoder based on a coding unit, according to an exemplary embodiment; [11] FIG. 6 illustrates a maximum coding unit, a sub coding unit, and a prediction unit, according to an exemplary embodiment; [12] FIG. 7 illustrates a coding unit and a transform unit, according to an exemplary embodiment; [13] FIGS. 8A through 8D illustrate division shapes of a coding unit, a prediction unit, and a transform unit, according to an exemplary embodiment; [14] FIG. 9 is a block diagram of an apparatus for encoding a motion vector, according to an exemplary embodiment; [15] FIGS. 10A and 10B illustrate motion vector predictor candidates, according to an exemplary embodiment; [16] FIGS. 10C through 10E illustrate blocks having various sizes that are adjacent to a current block, according to an exemplary embodiment; [17] FIGS.
11A through 11C illustrate motion vector predictor candidates, according to another exemplary embodiment; [18] FIG. 12 illustrates a method of reducing motion vector predictor candidates, according to an exemplary embodiment; [19] FIGS. 13A through 13D illustrate a location of a current block included in a coding unit having a predetermined size, according to an exemplary embodiment; [20] FIG. 14 is a block diagram of an apparatus for decoding a motion vector, according to an exemplary embodiment; [21] FIG. 15 is a flowchart illustrating a method of encoding a motion vector, according to an exemplary embodiment; and [22] FIG. 16 is a flowchart illustrating a method of decoding a motion vector, according to an exemplary embodiment. Best Mode for Carrying out the Invention [22a] According to a first aspect, the present invention provides an image decoding method comprising: decoding, from a bitstream, information about a motion vector difference for a current block and information about a motion vector predictor for the current block; generating a motion vector predictor candidate group; modifying the motion vector predictor candidate group based on values of motion vector predictor candidates in the motion vector predictor candidate group and the number of motion vector predictor candidates in the motion vector predictor candidate group; determining a motion vector predictor for the current block based on the motion vector predictor candidate group and the information about the motion vector predictor; and determining a motion vector for the current block based on the motion vector predictor and the information about the motion vector difference, wherein the modified motion vector predictor candidate group includes at least one of 1st motion vector predictor candidates and a 2nd motion vector predictor candidate, the 1st motion vector predictor candidates are based on motion vectors of neighboring blocks of the current block, the 2nd motion vector predictor candidate is based on a motion vector of a collocated block of the current block, and the collocated block is located in a reference picture, and wherein the neighboring blocks include a first block located at a lower left side of the current block and a second block located at an upper side of the first block. [23] There may be provided a method of encoding a motion vector, the method including: generating information about the motion vector based on a motion vector of a current block and a motion vector predictor of the current block by estimating the motion vector of the current block and determining a first motion vector predictor candidate from among a plurality of motion vector predictor candidates as the motion vector predictor of the current block based on a result of the estimating, the first motion vector predictor candidate being included among the plurality of motion vector predictor candidates; generating a virtual motion vector by using a second motion vector predictor candidate from among the plurality of motion vector predictor candidates and the information about the motion vector, generating vector differences between the virtual motion vector and the plurality of motion vector predictor candidates, comparing the vector differences with the information about the motion vector, and selectively excluding the second motion vector predictor candidate from among the plurality of motion vector predictor candidates; and encoding the information about the motion vector and information about the motion vector predictor of the current block.
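What follows is a minimal, illustrative Python sketch of the candidate-exclusion step summarized in the preceding paragraph; the function names, the cost measure, and the exact comparison rule are assumptions for illustration only and are not taken from the embodiments described below.

    # Illustrative sketch of selectively excluding a motion vector predictor candidate
    # by using a virtual motion vector.  The names and the cost measure are hypothetical.

    def can_exclude(candidates, tested_idx, mvd, cost):
        """Decide whether candidates[tested_idx] may be excluded.

        mvd is the encoded motion vector difference (the information about the motion
        vector).  A virtual motion vector is formed from the candidate under test, and
        vector differences between that virtual motion vector and every candidate are
        compared with mvd.
        """
        tested = candidates[tested_idx]
        # Virtual motion vector: the candidate under test plus the transmitted difference.
        virtual_mv = (tested[0] + mvd[0], tested[1] + mvd[1])
        for cand in candidates:
            # Virtual motion vector difference with respect to each candidate.
            vmvd = (virtual_mv[0] - cand[0], virtual_mv[1] - cand[1])
            if cost(vmvd) < cost(mvd):
                # Another candidate would express the virtual motion vector more
                # cheaply than the transmitted difference, so the tested candidate
                # can be dropped without creating ambiguity (one plausible rule).
                return True
        return False

    def abs_cost(v):
        # Simple stand-in for the number of bits needed: sum of absolute components.
        return abs(v[0]) + abs(v[1])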
[24] There may be provided a method of decoding a motion vector, the method including: decoding information about a motion vector of a current block; generating a virtual motion vector by using a predetermined motion vector predictor candidate from among a plurality of motion vector predictor candidates and the decoded information about the motion vector, generating vector differences between the virtual motion vector and the plurality of motion vector predictor candidates, comparing the generated vector differences with the decoded information about the motion vector, and selectively excluding the predetermined motion vector predictor candidate from among the plurality of motion vector predictor candidates; and determining a motion vector predictor candidate, from among the motion vector predictor candidates that are not excluded from the plurality of motion vector predictor candidates, as a motion vector predictor of the current block and restoring the motion vector of the current block based on the determined motion vector predictor and the decoded information about the motion vector, the determined motion vector predictor candidate being included among the plurality of motion vector predictor candidates. [25] There may be provided an apparatus for encoding a motion vector, the apparatus including: a motion vector estimator which generates information about the motion vector based on a motion vector of a current block and a motion vector predictor of the current block by estimating the motion vector of the current block and determines a first motion vector predictor candidate from among a plurality of motion vector predictor candidates as the motion vector predictor of the current block based on a result of the estimating, the first motion vector predictor candidate being included among the plurality of motion vector predictor candidates; a candidate determiner which generates a virtual motion vector by using a second motion vector predictor candidate from among the plurality of motion vector predictor candidates and the information about the motion vector, generates vector differences between the virtual motion vector and the plurality of motion vector predictor candidates, compares the vector differences with the information about the motion vector, and selectively excludes the second motion vector predictor candidate from among the plurality of motion vector predictor candidates; and a motion vector encoder which encodes the information about the motion vector and information about the motion vector predictor of the current block. [26] There may be provided an apparatus for decoding a motion vector, the apparatus including: a motion vector decoder which decodes information about a motion vector of a current block; a candidate determiner which generates a virtual motion vector by using a predetermined motion vector predictor candidate from among a plurality of motion vector predictor candidates and the decoded information about the motion vector, generates vector differences between the virtual motion vector and the plurality of motion vector predictor candidates, compares the generated vector differences with the decoded information about the motion vector, and selectively excludes the predetermined motion vector predictor candidate from among the plurality of motion vector predictor candidates; and a motion vector restoring unit which determines a motion vector predictor candidate of the motion vector predictor candidates that are not excluded from among the plurality of motion vector predictor candidates as a motion vector predictor of the current
block and restores the motion vector of the current block based on the determined motion vector predictor and the decoded information about the motion vector, the determined motion vector predictor candidate being included among the plurality of motion vector predictor candidates. [27] There may be provided a computer readable recording medium having embodied thereon a computer program for executing the methods of encoding and decoding a motion vector. Mode for the Invention [28] Hereinafter, one or more exemplary embodiments will be described more fully with reference to the accompanying drawings. Expressions such as "at least one of," when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list. [29] Hereinafter, an 'image' may denote a still image of a video or a moving image, that is, the video itself. [30] FIG. 1 is a block diagram of an apparatus 100 for encoding an image, according to an exemplary embodiment. [31] Referring to FIG. 1, the apparatus 100 for encoding an image includes a maximum coding unit divider 110, an encoding depth determiner 120, an image data encoder 130, and an encoding information encoder 140. [32] The maximum coding unit divider 110 may divide a current frame or slice based on a maximum coding unit that is a coding unit of the largest size. That is, the maximum coding unit divider 110 may divide the current frame or slice into at least one maximum coding unit. [33] According to an exemplary embodiment, a coding unit may be represented using a maximum coding unit and a depth. As described above, the maximum coding unit indicates a coding unit having the largest size from among coding units of the current frame, and the depth indicates a degree of hierarchically decreasing the coding unit. As a depth increases, a coding unit may decrease from a maximum coding unit to a minimum coding unit, wherein a depth of the maximum coding unit is defined as a minimum depth and a depth of the minimum coding unit is defined as a maximum depth. Since the size of a coding unit decreases from a maximum coding unit as a depth increases, a sub coding unit of a kth depth may include a plurality of sub coding units of a (k+n)th depth (k and n are integers equal to or greater than 1). [34] According to an increase of the size of a frame to be encoded, encoding an image in a greater coding unit may cause a higher image compression ratio. However, if a greater coding unit is fixed, an image may not be efficiently encoded by reflecting continuously changing image characteristics. [35] For example, when a smooth area such as the sea or sky is encoded, the greater a coding unit is, the more a compression ratio may increase. However, when a complex area such as people or buildings is encoded, the smaller a coding unit is, the more a compression ratio may increase. [36] Accordingly, according to an exemplary embodiment, a different maximum coding unit and a different maximum depth may be set for each frame or slice. Since a maximum depth denotes the maximum number of times by which a coding unit may decrease, the size of each minimum coding unit included in a maximum coding unit may be variably set according to a maximum depth. [37] The encoding depth determiner 120 determines a maximum depth.
The maximum depth may be determined based on calculation of rate-distortion (RD) costs. The maximum depth may be determined differently for each frame or slice or for each maximum coding unit. The determined maximum depth is provided to the encoding information encoder 140, and image data according to maximum coding units is provided to the image data encoder 130. [38] The maximum depth may denote a coding unit having the smallest size that may be included in a maximum coding unit, that is, a minimum coding unit. In other words, a maximum coding unit may be divided into sub coding units having different sizes according to different depths, as will be described later with reference to FIGS. 8A and 8B. Also, the sub coding units having different sizes, which are included in the maximum coding unit, may be predicted or transformed based on processing units having different sizes. The transform is performed to transform pixel values of a spatial domain to coefficients of a frequency domain and may be a discrete cosine transform or a Karhunen-Loeve transform (KLT). In other words, the apparatus 100 for encoding an image may perform a plurality of processing operations for image encoding based on processing units having various sizes and various shapes. To encode image data, processing operations such as at least one of prediction, transform, and entropy encoding are performed, wherein processing units having the same size or different sizes may be used for every operation. [39] For example, the apparatus 100 for encoding an image may select a processing unit that is different from a coding unit to predict the coding unit. [40] When the size of a coding unit is 2Nx2N (where N is a positive integer), processing units for prediction may be 2Nx2N, 2NxN, Nx2N, and NxN. In other words, motion prediction may be performed based on a processing unit having a shape whereby at least one of the height and width of a coding unit is equally divided by two. Hereinafter, a processing unit, which is the base of prediction, is defined as a 'prediction unit'. [41] A prediction mode may be at least one of an intra mode, an inter mode, and a skip mode, and a specific prediction mode may be performed for only a prediction unit having a specific size or shape. For example, the intra mode may be performed for only prediction units having the sizes of 2Nx2N and NxN, of which the shape is a square. Further, the skip mode may be performed for only a prediction unit having the size of 2Nx2N. If a plurality of prediction units exist in a coding unit, the prediction mode with the least encoding errors may be selected after performing prediction for every prediction unit. [42] Alternatively, the apparatus 100 for encoding an image may perform transform on image data based on a processing unit having a different size from a coding unit. For the transform in the coding unit, the transform may be performed based on a processing unit having a size equal to or smaller than that of the coding unit. Hereinafter, a processing unit, which is the base of transform, is defined as a 'transform unit'. [43] The encoding depth determiner 120 may determine sub coding units included in a maximum coding unit using RD optimization based on a Lagrangian multiplier.
In other words, the encoding depth determiner 120 may determine which shape a plurality of sub coding units divided from the maximum coding unit have, where the plurality of sub coding units have different sizes according to their depths. The image data encoder 130 outputs a bitstream by encoding the maximum coding unit based on the division shapes determined by the encoding depth determiner 120. [44] The encoding information encoder 140 encodes information about an encoding mode of the maximum coding unit determined by the encoding depth determiner 120. In other words, the encoding information encoder 140 outputs a bitstream by encoding information about a division shape of the maximum coding unit, information about the maximum depth, and information about an encoding mode of a sub coding unit for each depth. The information about the encoding mode of the sub coding unit may include information about a prediction unit of the sub coding unit, information about a prediction mode for each prediction unit, and information about a transform unit of the sub coding unit. [45] The information about the division shape of the maximum coding unit may be information indicating whether each coding unit is divided. For example, when the maximum coding unit is divided and encoded, information indicating whether the maximum coding unit is divided is encoded. Also, when a sub coding unit divided from the maximum coding unit is divided and encoded, information indicating whether the sub coding unit is divided is encoded. The information indicating whether the sub coding unit is divided may be flag information. [46] Since sub coding units having different sizes exist for each maximum coding unit and information about an encoding mode must be determined for each sub coding unit, information about at least one encoding mode may be determined for one maximum coding unit. [47] The apparatus 100 for encoding an image may generate sub coding units by equally dividing both the height and width of a maximum coding unit by two according to an increase of depth. That is, when the size of a coding unit of a kth depth is 2Nx2N, the size of a coding unit of a (k+1)th depth is NxN. [48] Accordingly, the apparatus 100 for encoding an image may determine an optimal division shape for each maximum coding unit based on sizes of a maximum coding unit and a maximum depth in consideration of image characteristics. By variably adjusting the size of a maximum coding unit in consideration of image characteristics and encoding an image through division of a maximum coding unit into sub coding units of different depths, images having various resolutions may be more efficiently encoded.
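The size/depth relationship described above can be summarized by a minimal Python sketch; the function name and the example values are illustrative only, and the sketch assumes a square maximum coding unit whose height and width are halved at every increase of depth.

    # Each increase in depth halves both the height and the width, so a coding unit
    # of size 2N x 2N at a kth depth yields coding units of size N x N at a (k+1)th depth.

    def coding_unit_size(max_size, depth):
        """Edge length of a square coding unit at the given depth, for a maximum
        coding unit of edge length max_size (e.g. 64)."""
        return max_size >> depth  # halve once per depth step

    # Example: a 64x64 maximum coding unit with maximum depth 3.
    sizes = [coding_unit_size(64, d) for d in range(4)]  # [64, 32, 16, 8]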
[49] FIG. 2 is a block diagram of an apparatus 200 for decoding an image, according to an exemplary embodiment. [50] Referring to FIG. 2, the apparatus 200 for decoding an image includes an image data acquisition unit 210, an encoding information extractor 220, and an image data decoder 230. [51] The image data acquisition unit 210 acquires image data according to maximum coding units by parsing a bitstream received by the apparatus 200 for decoding an image and outputs the image data to the image data decoder 230. The image data acquisition unit 210 may extract information about a maximum coding unit of a current frame or slice from a header of the current frame or slice. In other words, the image data acquisition unit 210 divides the bitstream into the maximum coding units so that the image data decoder 230 may decode the image data according to the maximum coding units. [52] The encoding information extractor 220 extracts information about a maximum coding unit, a maximum depth, a division shape of the maximum coding unit, and an encoding mode of sub coding units from the header of the current frame by parsing the bitstream received by the apparatus 200 for decoding an image. The information about a division shape and the information about an encoding mode are provided to the image data decoder 230. [53] The information about a division shape of the maximum coding unit may include information about sub coding units having different sizes according to depths and included in the maximum coding unit, and may be information (e.g., flag information) indicating whether each coding unit is divided. The information about an encoding mode may include information about a prediction unit according to sub coding units, information about a prediction mode, and information about a transform unit. [54] The image data decoder 230 restores the current frame by decoding image data of every maximum coding unit based on the information extracted by the encoding information extractor 220. [55] The image data decoder 230 may decode sub coding units included in a maximum coding unit based on the information about a division shape of the maximum coding unit. A decoding process may include a prediction process, including intra prediction and motion compensation, and an inverse transform process. [56] The image data decoder 230 may perform intra prediction or inter prediction based on information about a prediction unit according to sub coding units and information about a prediction mode in order to predict a sub coding unit. The image data decoder 230 may also perform inverse transform for each sub coding unit based on information about a transform unit of a sub coding unit. [57] FIG. 3 illustrates hierarchical coding units according to an exemplary embodiment. [58] Referring to FIG. 3, the hierarchical coding units may include coding units whose width x heights are 64x64, 32x32, 16x16, 8x8, and 4x4. Besides these coding units having perfect square shapes, coding units whose width x heights are 64x32, 32x64, 32x16, 16x32, 16x8, 8x16, 8x4, and 4x8 may also exist. [59] In FIG. 3, for image data 310 whose resolution is 1920x1080, the size of a maximum coding unit is set to 64x64, and a maximum depth is set to 2. [60] For image data 320 whose resolution is 1920x1080, the size of a maximum coding unit is set to 64x64, and a maximum depth is set to 3. For image data 330 whose resolution is 352x288, the size of a maximum coding unit is set to 16x16, and a maximum depth is set to 2. [61] When the resolution is high or the amount of data is great, a maximum size of a coding unit may be relatively great to increase a compression ratio and exactly reflect image characteristics.
Accordingly, for the image data 310 and 320 having higher resolution than the image data 330, 64x64 may be selected as the size of a maximum coding unit. [62] A maximum depth indicates the total number of layers in the hierarchical coding units. Since the maximum depth of the image data 310 is 2, a coding unit 315 of the image data 310 may include a maximum coding unit whose longer axis size is 64 and sub coding units whose longer axis sizes are 32 and 16, according to an increase of a depth. [63] On the other hand, since the maximum depth of the image data 330 is 2, a coding unit 335 of the image data 330 may include a maximum coding unit whose longer axis size is 16 and coding units whose longer axis sizes are 8 and 4, according to an increase of a depth. [64] However, since the maximum depth of the image data 320 is 4, a coding unit 325 of the image data 320 may include a maximum coding unit whose longer axis size is 64 and sub coding units whose longer axis sizes are 32, 16, 8 and 4, according to an increase of a depth. Since an image is encoded based on a smaller sub coding unit as a depth increases, an exemplary embodiment is suitable for encoding an image including more minute scenes. [65] FIG. 4 is a block diagram of an image encoder 400 based on a coding unit, according to an exemplary embodiment. [66] An intra prediction unit 410 performs intra prediction on prediction units of the intra mode in a current frame 405, and a motion estimator 420 and a motion compensator 425 perform inter prediction and motion compensation on prediction units of the inter mode using the current frame 405 and a reference frame 495. [67] Residual values are generated based on the prediction units output from the intra prediction unit 410, the motion estimator 420, and the motion compensator 425, and the generated residual values are output as quantized transform coefficients by passing through a transformer 430 and a quantizer 440. [68] The quantized transform coefficients are restored to residual values by passing through an inverse quantizer 460 and an inverse transformer 470, and the restored residual values are post-processed by passing through a deblocking unit 480 and a loop filtering unit 490 and output as the reference frame 495. The quantized transform coefficients may be output as a bitstream 455 by passing through an entropy encoder 450. [69] To perform encoding based on an encoding method according to an exemplary embodiment, components of the image encoder 400, i.e., the intra prediction unit 410, the motion estimator 420, the motion compensator 425, the transformer 430, the quantizer 440, the entropy encoder 450, the inverse quantizer 460, the inverse transformer 470, the deblocking unit 480, and the loop filtering unit 490, perform image encoding processes based on a maximum coding unit, a sub coding unit according to depths, a
prediction unit, and a transform unit. [70] FIG. 5 is a block diagram of an image decoder 500 based on a coding unit, according to an exemplary embodiment. [71] Referring to FIG. 5, a bitstream 505 passes through a parser 510 so that encoded image data to be decoded and encoding information used for decoding are parsed. The encoded image data is output as inverse-quantized data by passing through an entropy decoder 520 and an inverse quantizer 530, and restored to residual values by passing through an inverse transformer 540. The residual values are restored according to coding units by being added to an intra prediction result of an intra prediction unit 550 or a motion compensation result of a motion compensator 560. The restored coding units are used for prediction of next coding units or a next frame by passing through a deblocking unit 570 and a loop filtering unit 580. [72] To perform decoding based on a decoding method according to an exemplary embodiment, components of the image decoder 500, i.e., the parser 510, the entropy decoder 520, the inverse quantizer 530, the inverse transformer 540, the intra prediction unit 550, the motion compensator 560, the deblocking unit 570, and the loop filtering unit 580, perform image decoding processes based on a maximum coding unit, a sub coding unit according to depths, a prediction unit, and a transform unit. [73] In particular, the intra prediction unit 550 and the motion compensator 560 determine a prediction unit and a prediction mode in a sub coding unit by considering a maximum coding unit and a depth, and the inverse transformer 540 performs inverse transform by considering the size of a transform unit. [74] FIG. 6 illustrates a maximum coding unit, a sub coding unit, and a prediction unit, according to an exemplary embodiment. [75] The apparatus 100 for encoding an image illustrated in FIG. 1 and the apparatus 200 for decoding an image illustrated in FIG. 2 use hierarchical coding units to perform encoding and decoding in consideration of image characteristics. A maximum coding unit and a maximum depth may be adaptively set according to the image characteristics or variably set according to requirements of a user. [76] In FIG. 6, a hierarchical coding unit structure 600 has a maximum coding unit 610 whose height and width are 64 and whose maximum depth is 4. A depth increases along a vertical axis of the hierarchical coding unit structure 600, and as a depth increases, the heights and widths of sub coding units 620 to 650 decrease. Prediction units of the maximum coding unit 610 and the sub coding units 620 to 650 are shown along a horizontal axis of the hierarchical coding unit structure 600. [77] The maximum coding unit 610 has a depth of 0 and a size, i.e., height and width, of 64x64. A depth increases along the vertical axis, and there exist a sub coding unit 620 whose size is 32x32 and depth is 1, a sub coding unit 630 whose size is 16x16 and depth is 2, a sub coding unit 640 whose size is 8x8 and depth is 3, and a sub coding unit 650 whose size is 4x4 and depth is 4. The sub coding unit 650 whose size is 4x4 and depth is 4 is a minimum coding unit. [78] Referring to FIG. 6, examples of a prediction unit are shown along the horizontal axis according to each depth.
That is, a prediction unit of the maximum coding unit 610 whose depth is 0 may be a prediction unit whose size is equal to the coding unit 610, i.e., 64x64, or a prediction unit 612 whose size is 64x32, a prediction unit 614 whose size is 32x64, or a prediction unit 616 whose size is 32x32, which has a size smaller than the coding unit 610 whose size is 64x64. [79] A prediction unit of the coding unit 620 whose depth is 1 and size is 32x32 may be a prediction unit whose size is equal to the coding unit 620, i.e., 32x32, or a prediction unit 622 whose size is 32x16, a prediction unit 624 whose size is 16x32, or a prediction unit 626 whose size is 16x16, which has a size smaller than the coding unit 620 whose size is 32x32. [80] A prediction unit of the coding unit 630 whose depth is 2 and size is 16x16 may be a prediction unit whose size is equal to the coding unit 630, i.e., 16x16, or a prediction unit 632 whose size is 16x8, a prediction unit 634 whose size is 8x16, or a prediction unit 636 whose size is 8x8, which has a size smaller than the coding unit 630 whose size is 16x16. [81] A prediction unit of the coding unit 640 whose depth is 3 and size is 8x8 may be a prediction unit whose size is equal to the coding unit 640, i.e., 8x8, or a prediction unit 642 whose size is 8x4, a prediction unit 644 whose size is 4x8, or a prediction unit 646 whose size is 4x4, which has a size smaller than the coding unit 640 whose size is 8x8. [82] Finally, the coding unit 650 whose depth is 4 and size is 4x4 is a coding unit of a maximum depth, and a prediction unit of the coding unit 650 may be a prediction unit 650 whose size is 4x4. However, the coding unit of a maximum depth may not always have the same size as its prediction unit. Similar to the coding units 610 through 650, the coding unit of a maximum depth may be divided into prediction units having smaller sizes than the coding unit, and prediction may be performed. [83] FIG. 7 illustrates a coding unit and a transform unit, according to an exemplary embodiment. [84] The apparatus 100 for encoding an image illustrated in FIG. 1 and the apparatus 200 for decoding an image illustrated in FIG. 2 perform encoding and decoding with a maximum coding unit itself or with sub coding units, which are equal to or smaller than the maximum coding unit, divided from the maximum coding unit. In the encoding and decoding process, the size of a transform unit for transform is selected for the highest compression ratio regardless of the coding unit and the prediction unit. For example, referring to FIG. 7, when a current coding unit 710 has the size of 64x64, transform may be performed using a transform unit 720 having the size of 32x32. [85] FIGS. 8A through 8D illustrate division shapes of a coding unit 810, a prediction unit 860, and a transform unit 870, according to an exemplary embodiment. [86] FIGS. 8A and 8B illustrate a coding unit 810 and a prediction unit 860, according to an exemplary embodiment. [87] FIG. 8A shows a division shape selected by the apparatus 100 for encoding an image illustrated in FIG. 1, in order to encode a maximum coding unit 810. The apparatus 100 for encoding an image divides the maximum coding unit 810 into various shapes, performs encoding, and selects an optimal division shape by comparing encoding results of various division shapes with each other based on RD costs. When it is optimal that the maximum coding unit 810 is encoded as it is, the maximum coding unit 810 may be encoded without dividing the maximum coding unit 810, as illustrated in FIGS. 8A through 8D.
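As one illustration of how such a division shape might be chosen, the following hedged Python sketch enumerates the prediction-unit partitions of a square 2N x 2N coding unit (2Nx2N, 2NxN, Nx2N, and NxN, as in FIG. 6) and picks the one with the smallest total rate-distortion cost; rd_cost() and the toy cost used in the example are placeholders, not the actual measure used by the apparatus 100.

    # Hedged sketch: compare candidate prediction-unit partitions of a square coding
    # unit by a caller-supplied rate-distortion cost.  A real encoder evaluates many
    # more options (including further splitting into sub coding units).

    def partition_candidates(size):
        """Prediction-unit partitions of a coding unit of edge length `size` (== 2N):
        2Nx2N, 2NxN, Nx2N and NxN, each given as a list of (width, height) units."""
        n = size // 2
        return [[(size, size)], [(size, n)] * 2, [(n, size)] * 2, [(n, n)] * 4]

    def best_partition(size, rd_cost):
        """Return the partition whose summed R-D cost is smallest."""
        return min(partition_candidates(size),
                   key=lambda units: sum(rd_cost(w, h) for (w, h) in units))

    # Example with a toy cost that slightly favours fewer, larger prediction units.
    toy_cost = lambda w, h: 1.0 / (w * h) + 0.01
    chosen = best_partition(64, toy_cost)  # [(64, 64)] under this toy cost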
[88] Referring to FIG. 8A, the maximum coding unit 810 whose depth is 0 is encoded by dividing it into sub coding units whose depths are equal to or greater than 1. That is, the maximum coding unit 810 is divided into 4 sub coding units whose depths are 1, and all or some of the sub coding units whose depths are 1 are divided into sub coding units whose depths are 2. [89] A sub coding unit located in an upper-right side and a sub coding unit located in a lower-left side among the sub coding units whose depths are 1 are divided into sub coding units whose depths are equal to or greater than 2. Some of the sub coding units whose depths are equal to or greater than 2 may be divided into sub coding units whose depths are equal to or greater than 3. [90] FIG. 8B shows a division shape of a prediction unit 860 for the maximum coding unit 810. [91] Referring to FIG. 8B, a prediction unit 860 for the maximum coding unit 810 may be divided differently from the maximum coding unit 810. In other words, a prediction unit for each of the sub coding units may be smaller than a corresponding sub coding unit. [92] For example, a prediction unit for a sub coding unit 854 located in a lower-right side among the sub coding units whose depths are 1 may be smaller than the sub coding unit 854. In addition, prediction units for some sub coding units 814, 816, 850, and 852 from among the sub coding units 814, 816, 818, 828, 850, and 852 whose depths are 2 may be smaller than the sub coding units 814, 816, 850, and 852, respectively. [93] In addition, prediction units for sub coding units 822, 832, and 848 whose depths are 3 may be smaller than the sub coding units 822, 832, and 848, respectively. The prediction units may have a shape whereby respective sub coding units are equally divided by two in a direction of height or width, or have a shape whereby respective sub coding units are equally divided by four in directions of height and width. [94] FIGS. 8C and 8D illustrate a prediction unit 860 and a transform unit 870, according to an exemplary embodiment. [95] FIG. 8C shows a division shape of a prediction unit 860 for the maximum coding unit 810 shown in FIG. 8B, and FIG. 8D shows a division shape of a transform unit 870 of the maximum coding unit 810. [96] Referring to FIG. 8D, a division shape of a transform unit 870 may be set differently from the prediction unit 860. [97] For example, even though a prediction unit for the coding unit 854 whose depth is 1 is selected with a shape whereby the height of the coding unit 854 is equally divided by two, a transform unit may be selected with the same size as the coding unit 854. Likewise, even though prediction units for coding units 814 and 850 whose depths are 2 are selected with a shape whereby the height of each of the coding units 814 and 850 is equally divided by two, a transform unit may be selected with the same size as the original size of each of the coding units 814 and 850. [98] A transform unit may be selected with a smaller size than a prediction unit. For example, when a prediction unit for the coding unit 852 whose depth is 2 is selected with a shape whereby the width of the coding unit 852 is equally divided by two, a transform unit may be selected with a shape whereby the coding unit 852 is equally divided by four in directions of height and width, which has a smaller size than the shape of the prediction unit. [99] FIG.
9 is a block diagram of an apparatus 900 for encoding a motion vector, according to an exemplary embodiment. [100] The apparatus 900 for encoding a motion vector, which may be included in the apparatus 100 for encoding an image illustrated in FIG. 1 or the image encoder 400 illustrated in FIG. 4, is illustrated in detail in FIG. 9. Referring to FIG. 9, the apparatus 900 for encoding a motion vector includes a motion vector estimator 910, a candidate determiner 920, and a motion vector encoder 930. [101] In order to decode blocks encoded by using inter prediction, that is, temporal prediction, information about a motion vector that indicates a relative location difference between a current block and a similar block in a reference picture is needed. Accordingly, information about a motion vector is encoded during image encoding and is inserted into a bitstream. When the information about a motion vector is directly encoded and inserted, an overhead for encoding the information about a motion vector increases and thus the compression ratio of image data is lowered. [102] Accordingly, in image encoding, a motion vector of a current block is predicted, and only an original motion vector difference between a motion vector predictor generated as a result of the prediction and the original motion vector is encoded and inserted, so that the information about a motion vector is compressed. [103] In predictive encoding of a motion vector, an explicit mode and an implicit mode may exist. [104] In a codec such as MPEG-4 or H.264/MPEG-4 advanced video coding (AVC), motion vectors of previously encoded blocks adjacent to a current block may be used to predict a motion vector of the current block. A median of the motion vectors of the previously encoded blocks adjacent to a left side, an upper side, and a right upper side of the current block is used as a motion vector predictor of the current block. Since motion vectors of all blocks encoded using inter prediction are predicted by using the same method, information about a motion vector predictor of a current block may not be separately encoded. However, the apparatus 100 for encoding an image or the image encoder 400 according to one or more exemplary embodiments uses both an implicit mode, in which information about a motion vector predictor is not separately encoded, and an explicit mode, in which information about a motion vector predictor is encoded, so as to accurately predict a motion vector. In the explicit mode, information about the motion vector predictor candidate used as the motion vector predictor of a current block from among a plurality of motion vector predictor candidates is encoded and is inserted into a bitstream as a sequence parameter, a slice parameter, or a block parameter. [105] FIG. 9 illustrates an apparatus which performs predictive encoding when a motion vector is encoded according to the explicit mode. [106] The motion vector estimator 910 estimates a motion vector of a current block. A block similar to or the same as the current block is searched for in at least one reference picture, and as a result of the searching, a motion vector, which is a relative location difference between the current block and the searched block, is estimated. A block similar to or the same as the current block is searched for based on calculation of a sum of absolute differences (SAD), and as a result of the searching, the motion vector of the current block may be estimated.
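A simplified Python sketch of the SAD-based search mentioned above is given below; it assumes frames are 2-D numpy arrays of luma samples and uses an exhaustive integer-pel search, whereas a real motion estimator would also use fast search patterns and sub-pel refinement.

    # Simplified sketch of SAD-based block matching.  Assumes grayscale frames as
    # 2-D numpy arrays; exhaustive search over a small window for illustration only.
    import numpy as np

    def estimate_motion_vector(cur, ref, x, y, size, search_range):
        """Return (dx, dy) minimising the sum of absolute differences between the
        size x size block at (x, y) in `cur` and the displaced block in `ref`."""
        block = cur[y:y + size, x:x + size].astype(np.int32)
        best_mv, best_sad = (0, 0), None
        for dy in range(-search_range, search_range + 1):
            for dx in range(-search_range, search_range + 1):
                rx, ry = x + dx, y + dy
                if rx < 0 or ry < 0 or ry + size > ref.shape[0] or rx + size > ref.shape[1]:
                    continue  # candidate block falls outside the reference picture
                cand = ref[ry:ry + size, rx:rx + size].astype(np.int32)
                sad = int(np.abs(block - cand).sum())
                if best_sad is None or sad < best_sad:
                    best_mv, best_sad = (dx, dy), sad
        return best_mv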
[107] Also, the motion vector estimator 910 predicts a motion vector of the current block based on motion vectors of blocks included in a previously encoded area adjacent to the current block. In other words, the motion vectors of the blocks included in the previously encoded area adjacent to the current block are set to be motion vector predictor candidates, and a motion vector predictor candidate that is most similar to the estimated motion vector of the current block from among the motion vector predictor candidates is determined. [108] In a codec such as MPEG-4 or H.264/MPEG-4 AVC, a median of the motion vectors of the previously encoded blocks adjacent to a left side, an upper side, and a right upper side of the current block is used as a motion vector predictor of the current block. Since a motion vector of an encoded block is predicted by using the motion vectors of the previously encoded blocks and only one motion vector predictor is used, information about a motion vector predictor may not be separately encoded. In other words, the number of motion vector predictors of a block encoded using inter prediction is one. [109] However, when a motion vector of a current block is accurately predicted, the motion vector may be encoded with a high compression ratio. In this regard, according to an exemplary embodiment, one of a plurality of motion vector predictor candidates is selected and used as a motion vector predictor of a current block so that a motion vector of the current block may be encoded with a high compression ratio. A method of encoding a motion vector of a current block by using a plurality of motion vector predictor candidates will now be described in detail. [110] FIGS. 10A and 10B illustrate motion vector predictor candidates, according to an exemplary embodiment. [111] Referring to FIG. 10A, in a method of predicting a motion vector, one of motion vectors of previously encoded blocks adjacent to a current block may be used as a motion vector predictor of the current block. From among blocks adjacent to the current block, motion vectors of a block a0 at a leftmost side from among blocks adjacent to an upper side, a block b0 at an uppermost side from among blocks adjacent to a left side, a block c adjacent to a right upper side, a block d adjacent to a left upper side, and a block e adjacent to a left lower side may all be used as motion vector predictor candidates of the current block. [112] In a method of encoding and decoding an image according to one or more exemplary embodiments, image encoding and decoding are performed based on coding units having various sizes classified by depths, and thus the motion vector of the block e adjacent to the left lower side may also be used as a motion vector predictor candidate. [113] Referring to FIG. 10B, motion vectors of all blocks adjacent to a current block may be used as motion vector predictor candidates. In other words, motion vectors of not only the block a0 at the leftmost side from among the blocks adjacent to the upper side but also all blocks a0 through aN adjacent to the upper side may be used as motion vector predictor candidates. [114] Also, motion vectors of not only the block b0 at the uppermost side from among the blocks adjacent to the left side but also all blocks b0 through bN adjacent to the left side may be used as motion vector predictor candidates. [115] In addition, a median of motion vectors of adjacent blocks may be used as a motion vector predictor candidate. In other words, a median of mv_a0, mv_b0, and mv_c may be used as a motion vector predictor candidate of the current block.
Here, mv_a0 is a motion vector of the block a0, mv_b0 is a motion vector of the block b0, and mv_c is a motion vector of the block c. [116] However, motion vector predictor candidates of the current block may be restricted according to sizes of the current block and the adjacent blocks, as will be described in detail with reference to FIGS. 10C through 10E. [117] FIGS. 10C through 10E illustrate blocks having various sizes that are adjacent to a current block, according to an exemplary embodiment. [118] As described above, in the method of encoding and decoding an image according to one or more exemplary embodiments, coding units and prediction units having various sizes determined according to depths may be used to encode an image. Accordingly, since blocks adjacent to the current block may be of various sizes, when a size of the current block and sizes of some adjacent blocks are significantly different from each other, motion vectors of those adjacent blocks may not be used as motion vector predictor candidates. [119] Referring to FIG. 10C, blocks 1014 through 1018 adjacent to an upper side of a current block 1010 are smaller than the current block 1010. Since a motion vector of a block 1012 adjacent to the current block 1010 and having the same size as the current block 1010 may be the same as or similar to a motion vector of the current block 1010, the motion vector estimator 910 may use only the motion vector of the block 1012 as a motion vector predictor candidate. [120] Even if the sizes are not the same, only motion vectors of adjacent blocks having predetermined sizes or above may be used as motion vector predictor candidates. For example, only the motion vectors of the blocks 1012 and 1018, whose sizes are 1/4 of the size of the current block 1010 or above, may be used as motion vector predictor candidates. [121] Referring to FIG. 10D, a size of a block 1022 adjacent to a left side of a current block 1020 is 16 times larger than the size of the current block 1020, so that there exists a great difference in sizes. Due to the great difference, a motion vector of the block 1022 adjacent to the left side may not be the same as or similar to a motion vector of the current block 1020. Accordingly, the motion vector of the block 1022 adjacent to the left side is not used as a motion vector predictor candidate of the current block 1020, and only motion vectors of a block 1024 adjacent to an upper side and a block 1026 adjacent to a left upper side may be used as motion vector predictor candidates. [122] Referring to FIG. 10E, a size of a current block 1030 is larger than sizes of all adjacent blocks 1031 through 1037. Here, when motion vectors of all the adjacent blocks 1031 through 1037 are used as motion vector predictor candidates of the current block 1030, the number of motion vector predictor candidates of the current block 1030 may be too high. As a size difference between the current block 1030 and the adjacent blocks 1031 through 1037 increases, the number of motion vector predictor candidates also increases. Accordingly, the motion vector estimator 910 may not use motion vectors of some of the adjacent blocks as motion vector predictor candidates of the current block 1030. [123] For example, in FIG. 10E, the motion vectors of the block 1031 adjacent to the left lower side and the block 1037 adjacent to the right upper side may not be used as motion vector predictor candidates of the current block 1030.
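A hedged Python sketch of assembling spatial candidates while filtering out much smaller neighbours, in the spirit of the 1/4-size example of FIG. 10C, is shown below; the neighbour data structure, the threshold, and the even-length median rule are illustrative assumptions only.

    # Hedged sketch: collect spatial motion vector predictor candidates, skipping
    # neighbours whose area is below a fraction of the current block's area, and
    # append a component-wise median candidate.

    def component_median(mvs):
        """Component-wise median of a list of motion vectors, as used for the
        median(mv_a0, mv_b0, mv_c) candidate."""
        xs = sorted(mv[0] for mv in mvs)
        ys = sorted(mv[1] for mv in mvs)
        mid = len(mvs) // 2
        return (xs[mid], ys[mid])

    def spatial_candidates(current_area, neighbours, min_ratio=0.25):
        """neighbours: list of dicts {'mv': (x, y) or None, 'area': int} for blocks
        a0..aN, b0..bN, c, d, e.  A neighbour contributes its motion vector only if
        its area is at least min_ratio of the current block's area."""
        return [nb['mv'] for nb in neighbours
                if nb['mv'] is not None and nb['area'] >= min_ratio * current_area]

    # Example: a 32x32 current block with three neighbours; the 8x8 one is dropped.
    nbs = [{'mv': (2, 0), 'area': 32 * 32},
           {'mv': (1, -1), 'area': 16 * 16},
           {'mv': (0, 3), 'area': 8 * 8}]
    cands = spatial_candidates(32 * 32, nbs)
    cands.append(component_median(cands))  # add the median candidate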
[124] More generally, when a size of the current block 1030 is above a predetermined size, motion vectors of blocks adjacent in a specific direction from among the adjacent blocks may not be used as motion vector predictor candidates of the current block 1030. [125] FIGS. 11A through 11C illustrate motion vector predictor candidates, according to another exemplary embodiment. [126] FIG. 11A illustrates a method of determining a motion vector predictor candidate of a bi-directional predictive picture (B picture). When a current picture including a current block is a B picture on which bi-directional prediction is performed, a motion vector generated based on a temporal distance may be a motion vector predictor candidate. [127] A motion vector predictor candidate mv_temporal of a current block 1100 of a current picture 1110 may be determined by using a motion vector of a block 1120 disposed at a collocated location with the current block 1100 in a picture 1112 that temporally precedes the current picture 1110. For example, when a motion vector mv_colA of the block 1120 disposed at the collocated location with the current block 1100 is generated with respect to a searched block 1122 of a picture 1114 that temporally follows the current picture 1110, mv_L0A and mv_L1A, which are motion vector predictor candidates of the current block 1100, may be determined as follows: [128] mv_L0A = (t1/t2) * mv_colA [129] mv_L1A = mv_L0A - mv_colA [130] Here, mv_L0A denotes a motion vector predictor candidate of the current block 1100 with respect to the picture 1112 that temporally precedes the current picture 1110, and mv_L1A denotes a motion vector predictor candidate of the current block 1100 with respect to the picture 1114 that temporally follows the current picture 1110. [131] In FIG. 11A, the current picture 1110, which is a B picture, is interposed between the picture 1112 that temporally precedes the current picture 1110 and the picture 1114 that temporally follows the current picture 1110. Here, when the motion vector mv_colA of the block 1120 disposed at the collocated location with the current block 1100 is generated with respect to the picture 1114 that temporally follows the current picture 1110, the motion vector of the current block 1100 may be accurately predicted based on mv_L1A. In other words, the motion vector of the current block 1100 may be more accurately predicted when mv_colA is a motion vector in the direction illustrated in FIG. 11A, compared with when mv_colA is a motion vector in a direction opposite to the direction illustrated in FIG. 11A, that is, when mv_colA is generated with respect to another picture before the picture 1112 that temporally precedes the current picture 1110. [132] Accordingly, when a direction from the current block 1100 to the block 1120 disposed at the collocated location with the current block 1100 is a List0 direction, the motion vector mv_colA needs to be in a List1 direction, and thus the current picture 1110 may be interposed between the picture 1112 and the picture 1114, as illustrated in FIG. 11A, so that the motion vector of the current block 1100 may be accurately predicted based on mv_colA. [133] Also, the pictures 1110 through 1114 illustrated in FIG. 11A are arranged according to a time order, and thus the motion vector predictor candidate mv_temporal of the current block may be generated based on a picture order count (POC). A picture referred to by the current block may be a picture other than the adjacent pictures 1112 and 1114 illustrated in FIG.
11A, and thus the motion vector predictor candidate of the current block is generated based on a POC. [134] For example, when a POC of the current picture is CurrPOC and a POC of a picture referred to by the current picture is CurrRefPOC, the motion vector predictor candidate of the current block may be generated as follows: [135] Scale = (CurrPOC - CurrRefPOC) / (ColPOC - ColRefPOC) [136] mv_temporal = Scale * mv_colA [137] Here, ColPOC is a POC of the picture 1112 that temporally precedes the current picture 1110 and includes the block 1120, and ColRefPOC is a POC of the picture 1114 that temporally follows the current picture 1110 and includes the block 1122 referred to by the block 1120. [138] FIG. 11B illustrates another method of generating a motion vector predictor candidate of a B picture. The method of FIG. 11B is different from the method of FIG. 11A in that the picture 1114 that temporally follows the current picture 1110 includes a block disposed at a collocated location with the current block 1100. [139] Referring to FIG. 11B, the motion vector predictor candidate of the current block 1100 of the current picture 1110 may be generated by using a motion vector of a block 1130 disposed at a collocated location with the current block 1100 in the picture 1114 that temporally follows the current picture 1110. For example, when a motion vector mv_colB of the block 1130 disposed at the collocated location with the current block 1100 is generated with respect to a searched block 1132 of the picture 1112 that temporally precedes the current picture 1110, mv_L0B and mv_L1B, which are motion vector predictor candidates of the current block 1100, may be determined as follows: [140] mv_L0B = (t3/t4) * mv_colB [141] mv_L1B = mv_L0B - mv_colB [142] Here, mv_L0B denotes a motion vector predictor candidate of the current block 1100 with respect to the picture 1112 that temporally precedes the current picture 1110, and mv_L1B denotes a motion vector predictor candidate of the current block 1100 with respect to the picture 1114 that temporally follows the current picture 1110. [143] Similar to FIG. 11A, in FIG. 11B the current picture 1110, which is a B picture, is interposed between the picture 1112 that temporally precedes the current picture 1110 and the picture 1114 that temporally follows the current picture 1110. Accordingly, when the motion vector mv_colB of the block 1130 disposed at the collocated location with the current block 1100 is generated with respect to the picture 1112 that temporally precedes the current picture 1110, the motion vector of the current block 1100 may be accurately predicted based on mv_L0B. In other words, the motion vector of the current block 1100 may be more accurately predicted when mv_colB is a motion vector in the direction illustrated in FIG. 11B, compared with when mv_colB is a motion vector in a direction opposite to the direction illustrated in FIG. 11B, that is, when mv_colB is generated with respect to another picture after the picture 1114 that temporally follows the current picture 1110. [144] Accordingly, when a direction from the current block 1100 to the block 1130 disposed at the collocated location with the current block 1100 is a List1 direction, the motion vector mv_colB of the block 1130 needs to be in a List0 direction, and thus the current picture 1110 may be interposed between the picture 1112 and
the picture 1114, as illustrated in FIG. 11B, so that the motion vector of the current block 1100 may be accurately predicted based on mv_colB. [145] Also, a picture referred to by the current block may be a picture other than the adjacent pictures 1112 and 1114 illustrated in FIG. 11B, and thus a motion vector predictor candidate of the current block is generated based on a POC. [146] For example, when a POC of the current picture is CurrPOC and a POC of a picture referred to by the current picture is CurrRefPOC, the motion vector predictor candidate of the current block may be generated as follows: [147] Scale = (CurrPOC - CurrRefPOC) / (ColPOC - ColRefPOC) [148] mv_temporal = Scale * mv_colB [149] Here, ColPOC is a POC of the picture 1114 that temporally follows the current picture 1110 and includes the block 1130, and ColRefPOC is a POC of the picture 1112 that temporally precedes the current picture 1110 and includes the block 1132 referred to by the block 1130. [150] FIG. 11C illustrates a motion vector predictor candidate of a predictive picture (P picture). [151] Referring to FIG. 11C, a motion vector predictor candidate of the current block 1100 of the current picture 1110 may be determined by using a motion vector of a block 1140 disposed at a collocated location with the current block 1100 in the picture 1112 that temporally precedes the current picture 1110. For example, when a motion vector mv_colC of the block 1140 disposed at the collocated location with the current block 1100 is generated with respect to a searched block 1142 of a picture 1116 that temporally precedes the current picture 1110, mv_L0C, which is a motion vector predictor candidate of the current block 1100, may be determined as follows: [152] mv_L0C = (t6/t5) * mv_colC [153] As described in relation to FIGS. 11A and 11B, mv_L0C may also be determined based on a POC. mv_L0C may be determined based on a POC of the current picture 1110, a POC of a picture referred to by the current picture 1110, a POC of the picture 1112 that temporally precedes the current picture 1110, and a POC of the picture 1116 that temporally precedes the current picture 1110. [154] Since the current picture 1110 is a P picture, only one motion vector predictor candidate of the current block 1100 is determined, unlike in FIGS. 11A and 11B. [155] Also, in order to use the motion vector predictor candidate generated based on a temporal distance in FIGS. 11A and 11B to predict a motion vector of the current block, information indicating which of the blocks 1120 and 1130 disposed at the collocated location with the current block 1100 is used to generate the motion vector predictor may also be encoded. Such information may be included in a slice header or a sequence header as a slice parameter or a sequence parameter.
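The POC-based scaling defined above (Scale = (CurrPOC - CurrRefPOC) / (ColPOC - ColRefPOC), mv_temporal = Scale * mv_col) can be written as a minimal Python sketch; the rounding of the scaled components and the example POC values are illustrative only, and a real codec would use fixed-point arithmetic with clipping.

    # Minimal sketch of scaling a collocated block's motion vector by the ratio of
    # POC distances to obtain the temporal motion vector predictor candidate.

    def temporal_mv_candidate(mv_col, curr_poc, curr_ref_poc, col_poc, col_ref_poc):
        """Scale mv_col = (x, y) by (CurrPOC - CurrRefPOC) / (ColPOC - ColRefPOC)."""
        scale = (curr_poc - curr_ref_poc) / (col_poc - col_ref_poc)
        return (round(mv_col[0] * scale), round(mv_col[1] * scale))

    # Example values chosen only to exercise the formula: scale = (8 - 4) / (6 - 10) = -1.
    mv_temporal = temporal_mv_candidate((8, -4), 8, 4, 6, 10)  # (-8, 4)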
For example, when the block a0 is encoded using intra prediction or refers to a picture different from the picture referred to by the current block, the motion vector mv_a0 of the block a0 is not effective and thus mv_a' = mv_a1. Also, when the motion vector of the block a1 is not effective, mv_a' = mv_a2. Similarly, mv_b' is a first effective motion vector from among mv_b0, mv_b1, ..., and mv_bN, and mv_c' is a first effective motion vector from among mv_c, mv_d, and mv_e.
[161] A motion vector of a block that refers to a picture other than the picture referred to by the current block, from among the motion vectors of blocks adjacent to the current block, may not efficiently predict the motion vector of the current block. Accordingly, the motion vector of such a block may be excluded from the motion vector predictor candidates.
[162] When the apparatus 900 for encoding a motion vector encodes a motion vector according to an explicit mode, the apparatus 900 also encodes information indicating which motion vector predictor candidate in group C is used to predict a motion vector of a current block. In other words, when the apparatus 900 for encoding a motion vector encodes a motion vector, the apparatus 900 allocates binary numbers to corresponding elements of group C, that is, the motion vector predictor candidates, and encodes the binary number corresponding to the motion vector predictor candidate used to predict a motion vector of a current block.
[163] In order to specify one of the elements in group C, binary numbers each corresponding to the motion vector predictor candidates are allocated and output. Thus, as the number of elements in group C decreases, the elements in group C may be specified with binary numbers of fewer bits.
[164] Accordingly, if there are overlapped motion vector predictor candidates in group C, binary numbers may be allocated after the overlapped motion vector predictor candidates are excluded from group C. For example, when group C is C = {median(mv_a', mv_b', mv_c'), mv_a', mv_b', mv_c', mv_temporal} as described above and mv_a', mv_b', and mv_c' are the same as each other, it may be determined that there are two elements in group C, as in C = {mv_a', mv_temporal}, and binary numbers may be allocated. When five elements in group C may be specified using 3 bits before the overlapped motion vector predictor candidates are excluded, two elements may be specified using 1 bit after the overlapped motion vector predictor candidates are excluded.
[165] In order to increase the probability that the overlapped motion vector predictor candidate is determined as a motion vector predictor of the current block, instead of excluding the overlapped motion vector predictor candidate, a predetermined weight may be added. As described above, since mv_a', mv_b', and mv_c' are the same as each other and only mv_a' is included in group C, a predetermined weight is added to mv_a', and thus the probability that mv_a' is used to predict a motion vector of a current block may increase.
[166] Also, when there is one motion vector predictor candidate, a binary number used to specify one of the motion vector predictor candidates may not be encoded. For example, when group C is C = {median(mv_a0, mv_b0, mv_c), mv_a0, mv_a1, ..., mv_aN, mv_b0, mv_b1, ..., mv_bN, mv_c, mv_d, mv_e, mv_temporal}, and the blocks a0 through aN, the blocks b0 through bN, the block c, the block d, and the block e are all intra predicted, group C is C = {mv_temporal} and thus substantially includes one element. Accordingly, in this case, the apparatus 900 for encoding a motion vector may not encode a binary number used to specify one of the motion vector predictor candidates.
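To illustrate how the reduced group C described above may be assembled, the following C sketch is offered as a non-limiting example only. The structure and function names (MV, first_effective, scale_temporal, build_group_c) are assumptions introduced for this illustration and are not terminology of the disclosure; the integer division used for the POC-based scale of paragraphs [146] through [148] is a simplification of the ratio Scale.

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct { int x, y; bool effective; } MV;  /* 'effective' marks a usable motion vector */

    /* First effective motion vector of a neighbouring group (mv_a', mv_b' or mv_c' above). */
    static MV first_effective(const MV *list, size_t n) {
        MV none = { 0, 0, false };
        for (size_t i = 0; i < n; ++i)
            if (list[i].effective) return list[i];
        return none;
    }

    static int median3(int a, int b, int c) {
        int lo = a < b ? a : b;
        int hi = a < b ? b : a;
        return c < lo ? lo : (c > hi ? hi : c);
    }

    /* Temporal candidate scaled by picture order count, cf. [146]-[148]:
       Scale = (CurrPOC - CurrRefPOC) / (ColPOC - ColRefPOC), mv_temporal = Scale * mv_col. */
    static MV scale_temporal(MV mv_col, int curr_poc, int curr_ref_poc, int col_poc, int col_ref_poc) {
        MV r = { 0, 0, false };
        int denom = col_poc - col_ref_poc;
        if (!mv_col.effective || denom == 0) return r;
        r.x = mv_col.x * (curr_poc - curr_ref_poc) / denom;
        r.y = mv_col.y * (curr_poc - curr_ref_poc) / denom;
        r.effective = true;
        return r;
    }

    /* Build C = {median(a', b', c'), a', b', c', temporal}, dropping ineffective and
       overlapped (duplicate) candidates as discussed in [159] and [164]. */
    static size_t build_group_c(const MV *a, size_t na, const MV *b, size_t nb,
                                const MV *c, size_t nc, MV temporal, MV out[5]) {
        MV ap = first_effective(a, na);
        MV bp = first_effective(b, nb);
        MV cp = first_effective(c, nc);
        MV med = { median3(ap.x, bp.x, cp.x), median3(ap.y, bp.y, cp.y),
                   ap.effective && bp.effective && cp.effective };
        MV all[5] = { med, ap, bp, cp, temporal };
        size_t n = 0;
        for (size_t i = 0; i < 5; ++i) {
            if (!all[i].effective) continue;
            bool dup = false;
            for (size_t j = 0; j < n; ++j)
                if (out[j].x == all[i].x && out[j].y == all[i].y) { dup = true; break; }
            if (!dup) out[n++] = all[i];
        }
        return n;  /* number of candidates actually placed in group C */
    }

In practice the scale would typically be computed in fixed-point arithmetic rather than with the truncating integer division shown in this sketch.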
[167] It would have been obvious to one of ordinary skill in the art that motion vectors other than the motion vector predictor candidates described above may be used as motion vector predictor candidates.
[168] In addition, according to another exemplary embodiment, the candidate determiner 920 may reduce the number of motion vector predictor candidates based on estimation of an encoding result.
[169] As described above, separate information is encoded and included in a bitstream in order to specify the motion vector predictor candidate used to predict a motion vector of a current block from among a plurality of motion vector predictor candidates. Accordingly, as the number of elements in group C decreases, the information required to specify the motion vector predictor candidate used to predict a motion vector of a current block in group C may be encoded with fewer bits. In this regard, the candidate determiner 920 may selectively exclude predetermined motion vector predictor candidates from among all motion vector predictor candidates by using a predetermined evaluation function, which will be described in detail with reference to FIG. 12.
[170] FIG. 12 illustrates a method of reducing motion vector predictor candidates, according to an exemplary embodiment.
[171] In FIG. 12, it is assumed that there are three elements MVP1, MVP2, and MVP3 in group C and the motion vector of a current block is MV. Since the motion vector predictor candidate that is most similar to the motion vector of the current block is used to predict the motion vector of the current block, MVP3, which is most similar to MV, is used to predict the motion vector of the current block.
[172] Accordingly, the vector difference (hereinafter referred to as the 'original motion vector difference') between the motion vector of the current block and the motion vector predictor candidate used to predict the motion vector of the current block is (2,0). Since MV is (5,0) and MVP3 is (3,0), the original motion vector difference is (2,0).
[173] The candidate determiner 920 selectively excludes at least one motion vector predictor candidate from among all motion vector predictor candidates by using the original motion vector difference and a predetermined evaluation function. More specifically, the original motion vector difference and a predetermined motion vector predictor candidate are used to generate a virtual motion vector, and differences (hereinafter referred to as 'virtual motion vector differences') between the generated virtual motion vector and all motion vector predictor candidates are generated with respect to all candidates. The original motion vector difference and the predetermined motion vector predictor candidate are added to each other to generate the virtual motion vector, and then vector differences between the generated virtual motion vector and all motion vector predictor candidates are calculated. By comparing the original motion vector difference with the virtual motion vector differences calculated with respect to all candidates, the predetermined motion vector predictor candidate may be selectively excluded from all motion vector predictor candidates.
[174] Referring to FIG.
12, the candidate determiner 920 determines whether MVP1, one of the motion vector predictor candidates, is excluded from all candidates.
[175] When a virtual motion vector difference generated by subtracting a motion vector predictor candidate from the virtual motion vector based on MVP1 is smaller than the original motion vector difference, MVP1 may not be used to predict the motion vector of the current block. For example, when the virtual motion vector difference generated by subtracting MVP3 from the virtual motion vector generated by adding MVP1 and the original motion vector difference is smaller than the original motion vector difference, MVP3 predicts the virtual motion vector more accurately than MVP1, and in this case MVP1 cannot be the motion vector predictor.
[176] In FIG. 12, when MVP1 and the original motion vector difference are added to each other, the virtual motion vector based on MVP1 is (2,0). Accordingly, when the virtual motion vector is generated based on MVP1, the virtual motion vector difference for MVP2 is (2,0) and the virtual motion vector difference for MVP3 is (-1,0). Here, since the virtual motion vector difference (-1,0) for MVP3 is smaller than the original motion vector difference (2,0), MVP1 may not be the motion vector predictor of the current block. Accordingly, MVP1 may be excluded from all motion vector predictor candidates. In other words, the motion vector predictor candidate that corresponds to MVP1 may be excluded from group C.
[177] Here, the virtual motion vector difference calculated for MVP1 itself is (2,0) and is always the same as the original motion vector difference, so that the virtual motion vector difference may not be smaller than the original motion vector difference. Accordingly, when the virtual motion vector differences are calculated for all motion vector predictor candidates, the virtual motion vector difference for MVP1 itself may not be calculated.
[178] When the determination on whether MVP1 is excluded is completed, the candidate determiner 920 determines whether MVP2 is excluded from all motion vector predictor candidates. When MVP2 and the original motion vector difference are added to each other, the virtual motion vector based on MVP2 is (2,0). Accordingly, the virtual motion vector difference for MVP1 is (2,0) and the virtual motion vector difference for MVP3 is (-1,0). Since the virtual motion vector difference for MVP3 is smaller than the original motion vector difference, MVP2 may be excluded from all motion vector predictor candidates, as with MVP1. When determining whether MVP2 is excluded, the virtual motion vector difference for MVP1 may be selectively compared with the original motion vector difference. Since it has already been determined that MVP1 should be excluded, the virtual motion vector differences for the candidates other than MVP1 may be compared with the original motion vector difference.
[179] Also, the candidate determiner 920 determines whether MVP3 is excluded. The virtual motion vector based on MVP3 is the same as the original motion vector of the current block. Although another motion vector predictor candidate (that is, MVP1 or MVP2) is subtracted from the original motion vector, a virtual motion vector difference that is smaller than the original motion vector difference may not be generated. Thus, MVP3 is not excluded from the motion vector predictor candidates.
Also, according to another exemplary embodiment, since MVP3, which is determined to be used to predict the motion vector of the current block, is not excluded from the motion vector predictor candidates, the candidate determiner 920 may skip the determination on whether MVP3, the motion vector predictor candidate used to predict the motion vector of the current block, is excluded.
[180] Briefly, when the candidate determiner 920 determines whether a second motion vector predictor candidate, one of the motion vector predictor candidates, is excluded, the second motion vector predictor candidate and the original motion vector difference are added to each other so as to generate a virtual motion vector, and differences between the virtual motion vector and the other motion vector predictor candidates are calculated for all candidates so as to generate a plurality of virtual motion vector differences. When at least one virtual motion vector difference that is smaller than the original motion vector difference exists among the plurality of virtual motion vector differences, the second motion vector predictor candidate is not the motion vector predictor of the current block and is excluded from the motion vector predictor candidates.
[181] Also, the candidate determiner 920 repeatedly performs the determination on exclusion for the motion vector predictor candidates, so that the number of the motion vector predictor candidates, that is, the number of elements of group C, may be reduced. Whether to exclude is determined in order, according to the arrangement order of the motion vector predictor candidates of group C. For example, when C = {median(mv_a', mv_b', mv_c'), mv_a', mv_b', mv_c', mv_temporal}, whether to exclude median(mv_a', mv_b', mv_c') is determined, and when that determination is completed, whether to exclude mv_a' is determined. Then, whether to exclude mv_b' is determined. According to the arrangement order of group C, the determination on exclusion is repeated up to mv_temporal.
[182] When the determination is repeatedly performed, the comparison between the virtual motion vector difference and the original motion vector difference for candidates already excluded in a previous determination may be omitted, as described in relation to the exclusion of MVP2.
[183] Also, group C may be rearranged according to a predetermined criterion, as will be described later with reference to FIGS. 13A through 13D. When group C is rearranged, the determination on exclusion is repeated according to the rearranged order.
[184] The comparison between the virtual motion vector difference and the original motion vector difference described in relation to FIG. 12 may be applied not only to a one-dimensional motion vector but also to a two-dimensional motion vector. In other words, the magnitude of the virtual motion vector difference defined by x-coordinates and y-coordinates is compared with the magnitude of the original motion vector difference, and thus a predetermined motion vector predictor candidate may be selectively excluded from all candidates.
[185] However, a magnitude used to compare the virtual motion vector difference with the original motion vector difference is only an example, and various criteria may be used to compare the virtual motion vector difference with the original motion vector difference.
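The exclusion procedure summarized in paragraphs [180] through [182] may be sketched in C as follows; the simple magnitude criterion corresponds to the comparison described in paragraph [184], and the names mv_cost, can_exclude and reduce_candidates are assumptions of this example only, not terminology of the disclosure.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdlib.h>

    typedef struct { int x, y; } MV;

    /* Magnitude criterion used for the comparison; other criteria, such as the
       variable length coding estimate of Equation 3 below, may be substituted. */
    static int mv_cost(MV v) { return abs(v.x) + abs(v.y); }

    /* Determine whether candidate 'idx' may be excluded: form the virtual motion
       vector (candidate + original MVD) and test whether any other remaining
       candidate gives a smaller virtual motion vector difference. */
    static bool can_exclude(const MV *cand, const bool *excluded, size_t n, size_t idx, MV mvd) {
        MV virt = { cand[idx].x + mvd.x, cand[idx].y + mvd.y };
        for (size_t j = 0; j < n; ++j) {
            if (j == idx || excluded[j]) continue;  /* skip itself and already-excluded candidates */
            MV vdiff = { virt.x - cand[j].x, virt.y - cand[j].y };
            if (mv_cost(vdiff) < mv_cost(mvd)) return true;
        }
        return false;
    }

    /* Repeat the determination for every candidate in the arrangement order of group C. */
    static size_t reduce_candidates(const MV *cand, size_t n, MV mvd, bool *excluded) {
        size_t remaining = n;
        for (size_t i = 0; i < n; ++i) excluded[i] = false;
        for (size_t i = 0; i < n; ++i) {
            if (can_exclude(cand, excluded, n, i, mvd)) {
                excluded[i] = true;
                --remaining;
            }
        }
        return remaining;
    }

Because this procedure uses only the candidate list and the original motion vector difference, a decoder that has decoded the motion vector difference can perform the same reduction, as described in relation to FIG. 14 below.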
[186] When an evaluation function that generates a value for the virtual motion vector difference and a value for the original motion vector difference based on a predetermined criterion is 'A', the virtual motion vector difference may be compared with the original motion vector difference according to Equation 1 below:
[187] (Equation 1) A(mvx + MVD - mvy) < A(MVD)
[188] The candidate determiner 920 determines whether at least one 'mvy' that satisfies Equation 1 exists among all the candidates in order to determine whether 'mvx', one of the motion vector predictor candidates, is excluded from the motion vector predictor candidates. In Equation 1, 'MVD' denotes the original motion vector difference. In order to determine whether to exclude 'mvx', 'A(mvx+MVD-mvy)', which is the value obtained by evaluating 'mvx+MVD-mvy', that is, the virtual motion vector difference between 'mvx+MVD', the virtual motion vector based on 'mvx', and 'mvy', another motion vector predictor candidate, by using the predetermined evaluation function 'A', is calculated, and the values generated as a result of the calculation are compared with 'A(MVD)', the value for the original motion vector difference. The motion vector predictor candidates other than 'mvx' from among all candidates are repeatedly substituted for 'mvy', and whether at least one 'mvy' that satisfies Equation 1 exists among all candidates is determined.
[189] As described above, the virtual motion vector difference and the original motion vector difference evaluated by 'A' may be defined by x-coordinates and y-coordinates. In this case, the evaluation function may be defined by the sum of an x-coordinate evaluated value and a y-coordinate evaluated value, as in Equation 2 below:
[190] (Equation 2)
[191] A(p, q) = f(p) + f(q)
[192] When the virtual motion vector difference or the original motion vector difference is defined by an x-coordinate 'p' and a y-coordinate 'q', each coordinate value is input to a predetermined function 'f', and the evaluation function 'A' is defined by the sum obtained by the substitution.
[193] According to an exemplary embodiment, the evaluation function 'A' in Equation 1 and Equation 2 may be an evaluation function that estimates a result obtained by entropy encoding the virtual motion vector difference and a result obtained by entropy encoding the original motion vector difference. The candidate determiner 920 estimates the results obtained by entropy encoding the virtual motion vector difference and the original motion vector difference based on the evaluation function 'A', and the number of the motion vector predictor candidates may be reduced based on the result of the estimation. This is described in detail with reference to Equation 3 below:
[194] (Equation 3)
[195] Length = 1;
[196] Temp = (val <= 0) ? ((-val << 1) + 1) : (val << 1);
[197] while (1 != Temp) {
[198] Temp >>= 1;
[199] Length += 2;
[200] }
[201] f(val) = Length
[202] A function 'f' that estimates a result obtained by entropy encoding with respect to an x-coordinate value or a y-coordinate value may be defined as in Equation 3. When 'val', an x-coordinate value or a y-coordinate value, is input to the function 'f' that estimates a variable length coding (for example, universal variable length coding) result, 'Length' is calculated according to Equation 3.
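A direct, runnable transcription of Equation 3 is given below as an illustrative sketch; the names f_bits and A_bits are labels of this example only.

    /* Estimate of the universal variable length coding result of Equation 3:
       a signed value is mapped to a positive code number, and the length grows
       by 2 bits each time the code number is halved down to 1. */
    static int f_bits(int val) {
        int length = 1;
        unsigned temp = (val <= 0) ? (((unsigned)(-val) << 1) + 1u) : ((unsigned)val << 1);
        while (temp != 1u) {
            temp >>= 1;
            length += 2;
        }
        return length;               /* f(val) = Length */
    }

    /* Evaluation function A of Equation 2: sum of the per-coordinate estimates. */
    static int A_bits(int p, int q) {
        return f_bits(p) + f_bits(q);
    }
    /* Example of paragraphs [207]-[210] below: A_bits(2, 0) returns f(2) + f(0) = 5 + 1 = 6. */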
[203] Equation 3 may also be represented as follows:
[204] f(val) = 2 x floor(log2(Temp)) + 1, with Temp defined as in Equation 3.
[205] The x-coordinate value or the y-coordinate value may be an x-coordinate value or a y-coordinate value of the virtual motion vector difference or the original motion vector difference.
[206] According to Equation 3, when 'val' is a negative number or 0, 'val' is changed to a positive number, is multiplied by '2' by shifting to the left by 1 bit, and '1' is added, and the result is stored in 'Temp'. When 'val' is a positive number, 'val' is multiplied by '2' by shifting to the left by 1 bit, and the result is stored in 'Temp'. Then, the 'while' loop is repeated until 'Temp' is 1, and 'Length' is calculated.
[207] For example, when the virtual motion vector difference or the original motion vector difference is (2,0), A(2,0) = f(2) + f(0).
[208] f(2) is calculated as follows. '2' of f(2) is a positive number and thus is shifted to the left by 1 bit so that 'Temp' is set to '4'. In a first 'while' loop, 'Temp' is '4', which is not '1', and thus '4' is multiplied by '1/2' by shifting to the right so that 'Temp' is set to '2'. Since the initial value of 'Length' is set to '1', 'Length' in the first 'while' loop is '3'.
[209] In a second 'while' loop, 'Temp' is '2', which is not '1', and thus '2' is multiplied by '1/2' by shifting to the right so that 'Temp' is set to '1'. Since the current 'Length' is '3', 'Length' is '5' in the second 'while' loop. A third 'while' loop is not performed since 'Temp' is '1', and f(2) is '5'.
[210] f(0) is calculated as follows. Since the input coordinate value of f(0) is '0', '0' is shifted to the left by 1 bit and '1' is added so that 'Temp' is set to '1'. Accordingly, the 'while' loop is not performed, and f(0) is '1' according to the initial value of 'Length'.
[211] The predetermined evaluation function 'f' described in relation to Equation 3 is a function for estimating a result of entropy encoding using variable length coding. Accordingly, the candidate determiner 920 estimates a result obtained by variable length coding the virtual motion vector differences by using the evaluation function 'A' in order to determine whether 'mvx' is excluded from the motion vector predictor candidates. As a result of the estimation, when at least one virtual motion vector difference estimated to be encoded with a shorter length than the original motion vector difference exists, 'mvx' is excluded from all motion vector predictor candidates.
[212] However, it would have been obvious to one of ordinary skill in the art that an entropy encoding result may be estimated by using methods other than the variable length coding result. For example, another evaluation function 'h' may be used to estimate and compare an entropy encoding result of the virtual motion vector difference and an entropy encoding result of the original motion vector difference. Here, 'h' may be a function that estimates a result of context adaptive binary arithmetic coding.
[213] Also, according to another exemplary embodiment, in order to increase the accuracy of an evaluation result based on a predetermined evaluation function, a result obtained by evaluating index information may also be estimated. The index information is used to specify a predetermined motion vector predictor candidate from among all motion vector predictor candidates.
This is described in detail with reference to Equation 4:
[214] (Equation 4)
[215] A(mvx + MVD - mvy, mvyIdx) < A(MVD, mvxIdx)
[216] The candidate determiner 920 determines whether at least one 'mvy' that satisfies Equation 4 exists among all candidates in order to determine whether 'mvx', one of the motion vector predictor candidates, is excluded from the motion vector predictor candidates. In Equation 4, 'MVD' denotes the original motion vector difference, and 'mvxIdx' and 'mvyIdx' denote the index information used to specify 'mvx' and 'mvy', respectively, from among all motion vector predictor candidates. In order to determine whether to exclude 'mvx', 'mvx+MVD-mvy', which is the virtual motion vector difference between 'mvx+MVD', the virtual motion vector based on 'mvx', and 'mvy', another motion vector predictor candidate, and the index information for specifying 'mvy' from among all candidates are evaluated by using the predetermined evaluation function 'A'. Also, the original motion vector difference and the index information for specifying 'mvx' from among all candidates are evaluated by using the predetermined evaluation function 'A'. As a result of the evaluation, whether at least one 'mvy' that satisfies Equation 4 exists among all candidates is determined.
[217] As described above, the virtual motion vector difference and the original motion vector difference evaluated by 'A' may be defined by x-coordinates and y-coordinates, and may be defined as in Equation 5 below:
[218] (Equation 5)
[219] A(mvx + MVD - mvy, mvyIdx) = f(p1) + f(q1) + g(mvyIdx)
[220] A(MVD, mvxIdx) = f(p2) + f(q2) + g(mvxIdx)
[221] In comparison with Equation 1, A(mvx+MVD-mvy) in the left side of Equation 1 only evaluates the virtual motion vector difference, whereas A(mvx+MVD-mvy, mvyIdx) of Equation 5 evaluates the virtual motion vector difference and the information for specifying 'mvy' from among all motion vector predictor candidates. The evaluation function 'A' may be a function for evaluating an entropy encoding result. Here, the function 'f' may be a function for estimating an entropy encoding result based on an x-coordinate value or a y-coordinate value of the virtual motion vector difference, as described in relation to Equation 2, and the function 'g' may be a function for estimating an entropy encoding result of 'mvyIdx'. When the x-coordinate value and the y-coordinate value of 'mvx+MVD-mvy' are 'p1' and 'q1', respectively, A(mvx+MVD-mvy, mvyIdx) may be calculated as in Equation 5.
[222] A(MVD) in the right side of Equation 1 only evaluates the original motion vector difference, whereas A(MVD, mvxIdx) of Equation 5 evaluates the original motion vector difference and the information for specifying 'mvx' from among all motion vector predictor candidates. The function 'f' may be a function for estimating an entropy encoding result based on an x-coordinate value or a y-coordinate value of the original motion vector difference, as described in relation to Equation 2, and the function 'g' may be a function for estimating an entropy encoding result of 'mvxIdx'. When the x-coordinate value and the y-coordinate value of 'MVD' are 'p2' and 'q2', respectively, A(MVD, mvxIdx) may be calculated as in Equation 5.
[223] The determination on exclusion according to Equations 4 and 5 may be supplementarily used in the determination on exclusion according to Equation 1. In other words, whether 'mvx' is excluded from all motion vector predictor candidates is first determined based on Equation 1, and exclusion may be determined again according to Equations 4 and 5.
For example, as a result of the determination according to Equation 1, when 'A(mvx+MVD-mvy)' is the same as or greater than 'A(MVD)', that is, when 'A(mvx+MVD-mvy)' is not smaller than 'A(MVD)', 'mvx' is not excluded from all motion vector predictor candidates according to Equation 1. However, even when 'A(mvx+MVD-mvy)' and 'A(MVD)' are the same as each other, 'mvx' may be excluded from all motion vector predictor candidates based on the result of the determination according to Equations 4 and 5.
[224] When the candidate determiner 920 determines exclusion for the motion vector predictor candidates based on Equations 1 through 5, the determination on exclusion for the motion vector predictor candidates is repeatedly performed according to the arrangement order of group C. According to another exemplary embodiment, the candidate determiner 920 may rearrange group C according to a predetermined criterion, and the determination on exclusion may be repeated according to the rearranged order. This is described in detail with reference to FIGS. 13A through 13D.
[225] FIGS. 13A through 13D illustrate a location of a current block included in a coding unit having a predetermined size, according to an exemplary embodiment.
[226] When all motion vector predictor candidates are C = {median(mv_a', mv_b', mv_c'), mv_a', mv_b', mv_c', mv_temporal}, binary numbers are each allocated to the motion vector predictor candidates of group C, and thus the motion vector predictor candidate used to predict a motion vector of a current block may be specified from among the motion vector predictor candidates, as described above.
[227] Here, the binary numbers are allocated according to the arrangement order of the motion vector predictor candidates included in group C and may be a variable length code based on a Huffman code. Accordingly, a small number of bits may be allocated to the motion vector predictor candidates disposed at the front in the arrangement order of group C. For example, the bit '0' may be allocated to median(mv_a', mv_b', mv_c'), the bits '10' may be allocated to mv_a', and the bits '110' may be allocated to mv_b'. Accordingly, the candidate determiner 920 arranges the motion vector predictor candidates according to a predetermined order so that a motion vector predictor candidate with a high possibility of being used in predicting the motion vector of a current block is located at the front in group C.
[228] The motion vector predictor with a high possibility of being used in predicting the motion vector of a current block may be determined according to the location of the current block in a coding unit. The location of the current block may be expressed by using a partition index of the current block. Partition indexes are allocated to the blocks in the coding unit according to a predetermined order. Thus, the motion vector predictor with a high possibility of being used in predicting the motion vector of the current block may be determined according to the partition index of the current block. As in FIG. 13A, when the current block is located at a lower side of the coding unit, the motion vector of the current block may be the same as or similar to a motion vector of a block adjacent to the left side of the coding unit or a motion vector of a block adjacent to the lower left side of the coding unit. Accordingly, the arrangement order may need to be changed so that the motion vector predictor candidate that corresponds to the motion vector of the block adjacent to the left side or the motion vector of the block adjacent to the lower left side is located at the front in group C. Since mv_b'
from among the motion vector predictor candidates in group C is the motion vector predictor candidate that corresponds to the motion vector of the block adjacent to the left side, the order of mv_b' and median(mv_a', mv_b', mv_c') is changed in group C, and thus group C may be rearranged as in C = {mv_b', mv_a', median(mv_a', mv_b', mv_c'), mv_c', mv_temporal}.
[229] Similarly, when the current block is located at a left side of the coding unit as in FIG. 13B, motion vector predictor candidates that correspond to a motion vector of a block adjacent to the left side of the coding unit and a motion vector of a block adjacent to the upper side of the coding unit may be used to predict the motion vector of the current block. Since mv_b' from among the motion vector predictor candidates in group C is the motion vector predictor candidate that corresponds to the motion vector of the block adjacent to the left side, the order of mv_b' and median(mv_a', mv_b', mv_c') is changed in group C, and thus group C may be rearranged as in C = {mv_b', mv_a', median(mv_a', mv_b', mv_c'), mv_c', mv_temporal}.
[230] As in FIG. 13C, when the current block is located at an upper side of the coding unit, a motion vector of a block adjacent to the left side of the coding unit and a motion vector of a block adjacent to the upper side of the coding unit may be used as a motion vector predictor of the current block. Since mv_a' from among the motion vector predictor candidates in group C is the motion vector predictor candidate that corresponds to the motion vector of the block adjacent to the upper side, the order of mv_a' and median(mv_a', mv_b', mv_c') is changed in group C, and thus group C may be rearranged as in C = {mv_a', median(mv_a', mv_b', mv_c'), mv_b', mv_c', mv_temporal}.
[231] As in FIG. 13D, when the current block is located at a right side of the coding unit, a motion vector predictor candidate that corresponds to a motion vector of a block adjacent to the upper side of the coding unit may be used to predict the motion vector of the current block. Since mv_c' from among the motion vector predictor candidates in group C is the motion vector predictor candidate that corresponds to the motion vector of the block adjacent to the upper right side, the order of mv_c' and median(mv_a', mv_b', mv_c') is changed in group C, and thus group C may be rearranged as in C = {mv_c', mv_a', mv_b', median(mv_a', mv_b', mv_c'), mv_temporal}.
[232] The location of a current block in a coding unit as a criterion used to rearrange the motion vector predictor candidates is only an example. In other words, various criteria may be used to rearrange the motion vector predictor candidates. Various criteria that place a motion vector predictor candidate with a high possibility of being similar to the motion vector of the current block at the front in group C may be used as criteria for rearranging the motion vector predictor candidates. The motion vector predictor candidate with a high possibility of being similar to the motion vector of the current block may be determined based on predetermined information related to other blocks encoded before the current block, and group C may be rearranged according to the determination.
[233] Also, the motion vector predictor candidate with a high possibility of being similar to the motion vector of the current block may be determined based on other information encoded or decoded with respect to the current block before the motion vector of the current block is encoded, and group C may be rearranged according to the determination.
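One possible reading of the position-dependent rearrangement of FIGS. 13A through 13D is sketched below; the enumeration of block locations and the swap-with-the-median rule are illustrative assumptions of this example, not a definition of the method.

    typedef struct { int x, y; } MV;

    /* Indices into group C = {median, mv_a', mv_b', mv_c', mv_temporal}, cf. [159]. */
    enum { IDX_MEDIAN = 0, IDX_A = 1, IDX_B = 2, IDX_C = 3, IDX_TEMPORAL = 4 };

    /* Coarse location of the current block inside its coding unit (FIGS. 13A-13D). */
    typedef enum { LOC_LOWER, LOC_LEFT, LOC_UPPER, LOC_RIGHT, LOC_OTHER } BlockLocation;

    static void swap_mv(MV *a, MV *b) { MV t = *a; *a = *b; *b = t; }

    /* Move the candidate judged most likely to be used to the front of group C so that
       the shortest binary number is allocated to it, cf. [227] and [239]. */
    static void rearrange_group_c(MV c[5], BlockLocation loc) {
        switch (loc) {
        case LOC_LOWER:   /* FIG. 13A: favour the left-side candidate mv_b' */
        case LOC_LEFT:    /* FIG. 13B */
            swap_mv(&c[IDX_MEDIAN], &c[IDX_B]);
            break;
        case LOC_UPPER:   /* FIG. 13C: favour the upper-side candidate mv_a' */
            swap_mv(&c[IDX_MEDIAN], &c[IDX_A]);
            break;
        case LOC_RIGHT:   /* FIG. 13D: favour the upper-right candidate mv_c' */
            swap_mv(&c[IDX_MEDIAN], &c[IDX_C]);
            break;
        default:
            break;        /* keep the default arrangement */
        }
    }

After such a rearrangement, the exclusion of overlapped candidates and the determination on exclusion according to Equations 1 through 5 proceed on the rearranged order, as described in the next paragraph.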
[234] In addition, group C may be rearranged by excluding overlapped motion vector predictor candidates. When there are overlapped motion vector predictor candidates among all motion vector predictor candidates, the overlapped motion vector predictor candidates are first excluded, and whether to exclude the remaining motion vector predictor candidates may then be determined according to Equations 1 through 5.
[235] Referring back to FIG. 9, the motion vector encoder 930 encodes the information about a motion vector and the information about a motion vector predictor. The information about a motion vector is the original motion vector difference between the original motion vector of a current block and the original motion vector predictor. The information about a motion vector predictor is information for specifying the motion vector predictor candidate used to predict the motion vector of a current block from among the motion vector predictor candidates from which at least one motion vector predictor candidate has been excluded. In other words, information for specifying the motion vector predictor of a current block from among the motion vector predictor candidates that are not excluded by the candidate determiner 920 is encoded as the information about a motion vector predictor.
[236] The original motion vector difference is received from the motion vector estimator 910 and is encoded according to a predetermined entropy encoding method. Also, information for specifying the motion vector predictor candidate used to predict the motion vector of a current block from among the motion vector predictor candidates determined by selectively excluding at least one motion vector predictor candidate in the candidate determiner 920 is encoded.
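The encoding of the information about the motion vector and the information about the motion vector predictor described above can be sketched as follows. The bit-writer shown is a placeholder assumption, as is the truncated-unary index code of the form '0', '10', '110' mentioned in paragraph [227]; the behaviour of omitting the index when a single candidate remains corresponds to paragraph [237] below.

    #include <stddef.h>
    #include <stdio.h>

    typedef struct { int x, y; } MV;

    /* Placeholder output routines; a real encoder would drive its entropy coder here. */
    static void put_bit(int b)          { putchar(b ? '1' : '0'); }
    static void put_signed_vlc(int val) { printf("[se:%d]", val); }

    /* Variable length index over the non-excluded candidates, according to the
       arrangement order of group C. */
    static void encode_predictor_index(size_t index, size_t num_candidates) {
        if (num_candidates <= 1) return;            /* index is implicit, nothing is written */
        for (size_t i = 0; i < index; ++i) put_bit(1);
        if (index + 1 < num_candidates) put_bit(0); /* the last index needs no terminating '0' */
    }

    /* Encode the original motion vector difference and, when needed, the index of the
       motion vector predictor candidate actually used. */
    static void encode_motion_vector_info(MV mvd, size_t predictor_index, size_t num_candidates) {
        put_signed_vlc(mvd.x);
        put_signed_vlc(mvd.y);
        encode_predictor_index(predictor_index, num_candidates);
    }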
[237] When the candidate determiner 920 determines the motion vector predictor candidates by excluding at least one motion vector predictor candidate from among all motion vector predictor candidates according to Equations 1 through 5, information for specifying the motion vector predictor candidate used to predict the motion vector of a current block from among the determined motion vector predictor candidates is encoded. The motion vector encoder 930 may index each of the motion vector predictor candidates that are not excluded by the candidate determiner 920 and entropy encode the index information as the information about a motion vector predictor. Indexing denotes allocating predetermined binary numbers to each of the motion vector predictor candidates, and the information about a motion vector predictor denotes the binary number allocated to the motion vector predictor candidate used to predict the motion vector of the current block. When one motion vector predictor candidate remains after the candidate determiner 920 selectively excludes at least one motion vector predictor candidate, information about a motion vector predictor may not be separately encoded in the motion vector encoder 930, as the motion vector predictor candidate to be used to predict the motion vector of a current block is implicitly determined.
[238] Also, as described in relation to FIGS. 13A through 13D, the candidate determiner 920 may index each of the motion vector predictor candidates generated by rearranging all motion vector predictor candidates according to a predetermined criterion and by selectively excluding at least one motion vector predictor candidate from among all the rearranged motion vector predictor candidates, and may entropy encode the index information.
[239] As a result of the rearrangement performed by the candidate determiner 920, the binary number with the smallest number of bits may be allocated to the motion vector predictor candidate with the highest possibility of being used to predict the motion vector of a current block, and thus the information about a motion vector predictor may be encoded with a high compression ratio.
[240] FIG. 14 is a block diagram of an apparatus 1400 for decoding a motion vector, according to an exemplary embodiment.
[241] The apparatus 1400 for decoding a motion vector of FIG. 14 is included in the apparatus 200 for decoding an image of FIG. 2 or the image decoder 500 of FIG. 5. Referring to FIG. 14, the apparatus 1400 for decoding a motion vector includes a motion vector decoder 1410, a candidate determiner 1420, and a motion vector restoring unit 1430.
[242] The apparatus 1400 for decoding a motion vector decodes a motion vector of a current block when the motion vector of the current block is encoded according to the explicit mode from among the explicit mode and the implicit mode described above.
[243] The motion vector decoder 1410 receives a bitstream for the motion vector of the current block and decodes the received bitstream. The information about the motion vector included in the bitstream is decoded. The original motion vector difference of the current block is decoded. The original motion vector difference may be decoded according to a predetermined entropy decoding method. The original motion vector difference is the vector difference between the motion vector of the current block and the motion vector predictor candidate used to predict the motion vector of the current block. In a method of decoding a motion vector according to an exemplary embodiment, at least one motion vector predictor candidate is excluded from among all motion vector predictor candidates according to Equations 1 through 5 and the motion vector predictor candidates are determined. The motion vector predictor candidates are not fixed and are determined in block units, so that the motion vector predictor candidates may be continuously changed. Accordingly, even though the information about the motion vector predictor is the same, if the motion vector predictor candidates are not determined, the motion vector predictor candidate used to predict the motion vector of a current block may not be accurately restored.
[244] Thus, the candidate determiner 1420 determines the motion vector predictor candidates before the motion vector predictor candidate used to predict the motion vector of a current block is determined. At least one motion vector predictor candidate from among all motion vector predictor candidates is selectively excluded according to Equations 1 through 5 and the motion vector predictor candidates are determined. A motion vector predictor candidate that is not used to predict the motion vector of a current block, from among all candidates determined based on motion vectors of blocks included in a previously decoded area adjacent to the current block, is excluded based on a predetermined evaluation function.
[245] A virtual motion vector is generated based on a predetermined motion vector predictor candidate from among all motion vector predictor candidates and the information about the motion vector decoded in the motion vector decoder 1410, and virtual motion vector differences, which are the differences between the generated virtual motion vector and the other motion vector predictor candidates, are calculated for all candidates. The calculated virtual motion vector differences are compared with the information about the motion vector decoded in the motion vector decoder 1410, that is, the information about the original motion vector difference, and the predetermined motion vector predictor candidate is selectively excluded. A result obtained by entropy encoding the virtual motion vector differences is compared with a result obtained by entropy encoding the original motion vector difference so that whether to exclude the predetermined motion vector predictor candidate may be determined. Also, in order to increase the estimation accuracy of the entropy encoding result, a result obtained by entropy encoding index information is also estimated and is used in the determination on exclusion.
The method of excluding the motion vector predictor candidates is described above in relation to Equations 1 through 5.
[246] Also, according to another exemplary embodiment, the candidate determiner 1420 may rearrange all motion vector predictor candidates according to a predetermined criterion, repeatedly perform the determination on exclusion for all rearranged motion vector predictor candidates according to Equations 1 through 5, and selectively exclude at least one motion vector predictor candidate. Overlapped motion vector predictor candidates may be excluded from all rearranged motion vector predictor candidates, and the determination on exclusion may then be repeatedly performed according to Equations 1 through 5.
[247] When a plurality of motion vector predictor candidates from among all motion vector predictor candidates remain after the candidate determiner 1420 excludes at least one motion vector predictor candidate from among all motion vector predictor candidates, the motion vector decoder 1410 decodes the information about a motion vector predictor. The information about a motion vector predictor is decoded according to a predetermined entropy decoding method. The information about a motion vector predictor is information for specifying the motion vector predictor candidate used to predict the motion vector of a current block from among the motion vector predictor candidates from which at least one motion vector predictor candidate has been excluded. Information for specifying the motion vector predictor candidate used to predict the motion vector of a current block from among the motion vector predictor candidates that are not excluded by the candidate determiner 1420 is decoded.
[248] When one motion vector predictor candidate remains after the candidate determiner 1420 excludes at least one motion vector predictor candidate from among all motion vector predictor candidates, the remaining motion vector predictor candidate is used to predict the motion vector of a current block, and thus the motion vector decoder 1410 may not separately decode information about the motion vector predictor candidate.
[249] The motion vector restoring unit 1430 restores the motion vector of a current block based on the information about the motion vector decoded in the motion vector decoder 1410. The original motion vector difference decoded in the motion vector decoder 1410 and the motion vector predictor candidate used to predict the motion vector of a current block are added to each other so as to restore the motion vector of the current block. The motion vector predictor candidate to be used in predicting the motion vector of a current block is determined from among the motion vector predictor candidates determined by the candidate determiner 1420, and the determined motion vector predictor candidate is added to the original motion vector difference. When a plurality of motion vector predictor candidates remain, instead of one motion vector predictor candidate, as a result of the exclusion in the candidate determiner 1420, the motion vector predictor candidate used to predict the motion vector of a current block may be determined based on the information about a motion vector predictor decoded in the motion vector decoder 1410.
[250] Since the motion vector predictor candidates are determined by the candidate determiner 1420, even if the decoded information about a motion vector predictor is the same, the motion vector predictor candidates used to predict a motion vector of a current block may be motion vectors of adjacent blocks in different locations.
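For the decoding side, the restoration described in paragraphs [247] through [249] may be sketched as follows. The exclusion flags are assumed to have been produced by the same candidate-reduction procedure as in the encoder-side sketch given earlier, driven by the decoded original motion vector difference; the names and the selection logic are illustrative only.

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct { int x, y; } MV;

    /* Restore the motion vector of the current block as predictor + decoded MVD.
       'excluded' holds the result of the candidate reduction performed with the
       decoded motion vector difference; 'decoded_index' is the predictor index
       decoded from the bitstream and is ignored when only one candidate remains. */
    static MV restore_motion_vector(const MV *cand, const bool *excluded, size_t n,
                                    size_t decoded_index, size_t num_remaining, MV decoded_mvd) {
        MV predictor = { 0, 0 };
        size_t k = 0;
        for (size_t i = 0; i < n; ++i) {
            if (excluded[i]) continue;                       /* only non-excluded candidates are indexed */
            if (num_remaining <= 1 || k == decoded_index) {  /* a single survivor is chosen implicitly */
                predictor = cand[i];
                break;
            }
            ++k;
        }
        MV mv = { predictor.x + decoded_mvd.x, predictor.y + decoded_mvd.y };
        return mv;
    }

Because the index refers to the candidate set as reduced at the decoder, the candidate determiner 1420 must reproduce the exclusion before the index is interpreted, as noted in paragraphs [243] and [244] above.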
[251] FIG. 15 is a flowchart illustrating a method of encoding a motion vector, according to an exemplary embodiment.
[252] Referring to FIG. 15, in operation 1510, an apparatus for encoding a motion vector estimates a motion vector of a current block and determines a motion vector predictor candidate used to predict the motion vector of the current block from among all motion vector predictor candidates. A block that is the same as or similar to the current block is searched for in a plurality of reference pictures, and as a result of the searching, a motion vector, that is, a relative location difference between the current block and a reference block, is estimated.
[253] Then, the motion vector of the current block is predicted based on motion vectors of blocks included in a previously encoded area adjacent to the current block. In other words, the motion vectors of the blocks included in the previously encoded area adjacent to the current block are set to be motion vector predictor candidates, and the motion vector predictor candidate that is most similar to the estimated motion vector of the current block from among the motion vector predictor candidates is determined. A vector difference between the motion vector of the current block and the determined motion vector predictor candidate, that is, the original motion vector difference, is generated.
[254] In operation 1520, the apparatus for encoding an image selectively excludes at least one motion vector predictor candidate from among all motion vector predictor candidates. A motion vector predictor candidate that is not used to predict the motion vector of the current block is excluded from among all motion vector predictor candidates.
[255] The apparatus for encoding an image generates a virtual motion vector by using a predetermined motion vector predictor candidate from among all motion vector predictor candidates and the original motion vector difference generated in operation 1510. The generated virtual motion vector and the other motion vector predictor candidates are used to generate virtual motion vector differences. The virtual motion vector differences for each of all motion vector predictor candidates are generated, and the generated virtual motion vector differences are compared with the original motion vector difference so that the predetermined motion vector predictor candidate may be selectively excluded.
[256] The process of generating the virtual motion vector and selectively excluding the motion vector predictor candidate in operation 1520 is repeatedly performed for all candidates, and thus at least one motion vector predictor candidate may be excluded from all candidates. When the excluding process is repeatedly performed, the virtual motion vector differences for the motion vector predictor candidates other than the motion vector predictor candidates that are already excluded are calculated, and the calculated virtual motion vector differences may be compared with the original motion vector difference.
[257] The virtual motion vector difference and the original motion vector difference may be evaluated and compared with each other based on a predetermined evaluation function, wherein the predetermined evaluation function may be a function that estimates an entropy encoding result.
The virtual motion vector difference and the original motion vector difference may be compared with each other based on a function that estimates a result obtained by entropy encoding the virtual motion vector difference and a result obtained by entropy encoding the original motion vector difference. Also, in order to increase the evaluation accuracy, a result obtained by encoding index information may also be estimated and used in the determination on exclusion. The method of excluding at least one motion vector predictor candidate from among all motion vector predictor candidates is described above in relation to Equations 1 through 5.
[258] Also, as described in relation to FIGS. 13A through 13D, the apparatus for encoding a motion vector may rearrange all motion vector predictor candidates according to a predetermined criterion and selectively exclude at least one motion vector predictor candidate from among all rearranged motion vector predictor candidates. In addition, the apparatus for encoding a motion vector may exclude overlapped motion vector predictor candidates from among all rearranged motion vector predictor candidates and repeatedly perform the determination on exclusion according to Equations 1 through 5.
[259] In operation 1530, the apparatus for encoding a motion vector encodes the information about the motion vector and the information about the motion vector predictor. Information about the original motion vector difference and information for specifying the motion vector predictor candidate used to predict the motion vector of the current block are encoded. The information about the motion vector predictor may be information for specifying the motion vector predictor candidate used to predict the motion vector of a current block from among the motion vector predictor candidates that are not excluded in operation 1520.
[260] When one motion vector predictor candidate remains after at least one motion vector predictor candidate is excluded from among all motion vector predictor candidates, the information about the motion vector predictor may not be encoded.
[261] FIG. 16 is a flowchart illustrating a method of decoding a motion vector, according to an exemplary embodiment.
[262] Referring to FIG. 16, in operation 1610, an apparatus for decoding a motion vector decodes information about a motion vector of a current block from a received bitstream. The information about a motion vector may be the original motion vector difference between the motion vector of the current block and the motion vector predictor of the current block.
[263] In operation 1620, the apparatus for decoding a motion vector generates a virtual motion vector based on the information about the motion vector decoded in operation 1610 and one motion vector predictor candidate from among all motion vector predictor candidates.
[264] When the virtual motion vector is generated, the apparatus for decoding a motion vector excludes at least one motion vector predictor candidate from among all motion vector predictor candidates. All motion vector predictor candidates are determined based on motion vectors of blocks included in a previously decoded area adjacent to the current block. The apparatus for decoding a motion vector may selectively exclude at least one motion vector predictor candidate from among all motion vector predictor candidates. The virtual motion vector difference and the original motion vector difference decoded in operation 1610 are evaluated based on a predetermined evaluation function so as to selectively exclude a predetermined motion vector predictor candidate.
The method of excluding the motion vector predictor candidate from among all candidates is the same as in operation 1520 and is described above with reference to Equations 1 through 5.
[265] The process of generating the virtual motion vector and selectively excluding the motion vector predictor candidate in operation 1620 is repeatedly performed for all candidates, and thus at least one motion vector predictor candidate may be excluded from all candidates.
[266] Also, as described in relation to FIGS. 13A through 13D, the apparatus for decoding a motion vector may rearrange all motion vector predictor candidates according to a predetermined criterion and selectively exclude at least one motion vector predictor candidate from among all rearranged motion vector predictor candidates. In addition, the apparatus for decoding a motion vector may exclude overlapped motion vector predictor candidates from among all rearranged motion vector predictor candidates and repeatedly perform the determination on exclusion according to Equations 1 through 5.
[267] When a plurality of motion vector predictor candidates remain as a result of the exclusion, the information about the motion vector predictor is decoded, and when one motion vector predictor candidate remains, the information about the motion vector predictor is not decoded.
[268] In operation 1630, the apparatus for decoding a motion vector determines a motion vector predictor used to predict the motion vector of the current block from among the motion vector predictor candidates that are not excluded in operation 1620.
[269] The motion vector predictor candidate used to predict the motion vector of the current block from among the motion vector predictor candidates may be determined based on the information about the motion vector predictor of the current block. When one motion vector predictor candidate remains as a result of the exclusion in operation 1620, the one remaining motion vector predictor candidate is determined as the motion vector predictor used to predict the motion vector of the current block.
[270] When the motion vector predictor candidate is determined, the motion vector of the current block is restored by adding the determined motion vector predictor candidate and the original motion vector difference decoded in operation 1610.
[271] According to an exemplary embodiment, when the motion vector predictor candidates are used to predictively encode and decode a motion vector, the number of the motion vector predictor candidates may be reduced. Accordingly, the information required to specify the motion vector predictor used to predict the motion vector of a current block from among the motion vector predictor candidates may be encoded with a minimum number of bits, so that the compression ratio of encoding/decoding a motion vector is increased and thereby the compression ratio of encoding/decoding an image may be improved.
[272] While exemplary embodiments have been particularly shown and described above, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present inventive concept as defined by the following claims.
[273] Exemplary embodiments can also be embodied as computer readable codes on a computer readable recording medium.
[274] For example, the apparatus for encoding an image, the apparatus for decoding an image, the motion vector encoder, and the motion vector decoder according to exemplary embodiments may each include a bus coupled to each element included in the apparatuses of FIGS. 1, 2, 4, 5, 9, and 14 and at least one processor coupled to the bus. Also, the apparatuses may each include a memory coupled to the at least one processor and to the bus in order to store commands, received messages, or generated messages, the commands being executed by the at least one processor.
[275] The computer readable recording medium is any data storage device that can store data which can be thereafter read by a computer system. Examples of the computer readable recording medium include read-only memory (ROM), random-access memory (RAM), CD-ROMs, magnetic tapes, floppy disks, and optical data storage devices. The computer readable recording medium can also be distributed over network coupled computer systems so that the computer readable code is stored and executed in a distributed fashion.
[276] Where the terms "comprise", "comprises", "comprised" or "comprising" are used in this specification (including the claims), they are to be interpreted as specifying the presence of the stated features, integers, steps or components, but not precluding the presence of one or more other features, integers, steps or components, or groups thereof.