
GB2362483A - Method and apparatus for calculating the magnitude of a task - Google Patents


Info

Publication number
GB2362483A
GB2362483A (application GB0011728A)
Authority
GB
United Kingdom
Prior art keywords
data
task
local factors
code
expected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB0011728A
Other versions
GB0011728D0 (en)
Inventor
David Michael Victor
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to GB0011728A priority Critical patent/GB2362483A/en
Publication of GB0011728D0 publication Critical patent/GB0011728D0/en
Publication of GB2362483A publication Critical patent/GB2362483A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Educational Administration (AREA)
  • Game Theory and Decision Science (AREA)
  • Development Economics (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Stored Programmes (AREA)

Abstract

A method is provided for calculating an expected time for writing software for performing a task. The method comprises, under the control of a program stored in a data processor: entering into said data processor data that defines the functional content of the task; entering into said data processor data that defines local factors relating to the carrying out of the task; calculating by means of said data processor, from the functional content data and the local factors data, the number of lines of code that the software is expected to contain; calculating by means of said data processor, from the local factors data and the expected number of lines of code, the expected time to write the software; and providing an output indicating at least the expected time. In tests the method has been found to provide good predictions of the actual time required for particular programming tasks. A program for carrying out the method may be stored on a magnetic or optical disc or may be supplied as a signal containing one or more digital files for downloading via a local network or via the Internet.

Description

2362483 METHOD AND APPARATUS FOR CALCULATING THE MAGNITUDE OF A TASK
FIELD OF THE INVENTION
This invention relates to a method and apparatus for predicting the time it will take to write code for performing a software task, which is useful in the management of program development. It also relates to a magnetic or optical disc or signal in which instructions for carrying out the above method are stored.
BACKGROUND OF THE INVENTION
Various proposals have been put forward for managing program development; an example of relatively recent thinking is provided by US-A-5878262 (Shoumura, assigned to Hitachi Software Engineering Co), which is based on the idea of providing a resource file database and link information. Although such a system will speed up the task of software development, it does not provide an indication of how much work is involved in the writing of a particular new program or how long it is expected to take.
Function point analysis is a technique that was first proposed by Allan J. Albrecht in the late 1970s and has been explained by Capers Jones, 'Sizing-up Software', Scientific American, December 1998. It provides a means for estimating the size of a software project, or of an application developed or enhanced by a software project, based on the user's view of the functional requirements of the application, but without regard to the technology, design tools or language used. Five basic functions are recognized, which may be classified as data functions or transactional functions as follows:
Data functions:
- Internal logical files (ILF): logical groupings of data in a system maintained by an end user.
- External interface files (EIF): logical groupings of data maintained by other users or systems and used only for reference purposes.
Transactional functions:
- External inputs (EI): may be used to add, change or delete information in an ILF.
- External outputs (EO): maintained or referenced data is retrieved and/or manipulated to produce an output.
- External inquiries (EQ): an output is produced by the direct retrieval of stored information.
Additionally function point analysis recognizes two adjustment factors:
Functional complexity - for each function the number of data elements and unique groupings is counted and compared to a complexity matrix for that function so that the function can be rated as of low, medium or high complexity depending on a count of data element types and file types referenced.
Value adjustment factor - the unadjusted function point count is multiplied by this factor, which takes into account the technical and operational characteristics of the system and is calculated in response to inputs relating to: data communications, distributed data processing, performance, heavy use, transaction rate, online data entry, end-user efficiency, on-line update, complexity of processing, re-usability, ease of installation, ease of operation, multiplicity of sites, and facilitating change.
The result of the analysis is an adjusted number of function points that measure the complexity of the software independent of technology or system. Function points are a better measure of inherent complexity than lines of code because different computer languages require different lengths of code to specify the same operation. As indicated in the Scientific American article, the expense of producing software can be compared in terms of function points independent of machine, operating system and language. In an article on the Internet by Linda Smith of Predicate Logic, Inc, "Function point analysis and its uses" (www.predicate.com), the stated benefits of function point analysis are that it:
- Measures objectively and consistently, and can be audited.
- Normalizes data for comparison between projects, applications and organizations.
- Provides one size measurement for all types of applications and businesses.
- Makes size available early in the project life cycle.
- Is easily understood, applied, used and obtained.
- Represents what is delivered to the customer.
- Provides a basis for communication with the customer.
It will be appreciated that the emphasis of the work to date on function point analysis has been towards abstract measurement and an idealized measure of productivity. However, it is often required to predict how particular people, with individual skills and experience and possibly with gaps in the pre-existing knowledge required for the task, will perform an actual task constrained e.g. by the requirement to use a particular language and to work with a pre-existing computer system and software. If the time for the task is under-estimated it will overrun, whereas if it is over-estimated, highly skilled people may be under-utilized.
In the inventor's opinion, managers in the software industry almost invariably treat task estimation casually, largely because they do not have effective techniques at their fingertips, with inevitable consequences. Slippage is insidious - it sneaks up one day at a time. If ten staff all lose just one day in a single week, a fortnight's worth of extra effort has to be found and paid for. But could the problem be one of faulty perception? Suppose the estimates were seriously adrift, that they did not represent what was realistically attainable. Would it then be fair to criticize people for incurring slippage and to put them under pressure to make up the deficit? Of course not, but that is exactly what happens, with inevitable results: morale takes a knock, corners are cut, quality suffers and errors are introduced which will cost additional time and money to rectify further down the line. And meanwhile managers continue to lurch down the highway of rushed development, watching the signposts change from Estimate to Deadline to Ultimatum to Inquest. The situation just described is endemic in the profession of software development, even when task specifications are paragons of completeness and clarity and the staff are competent seasoned professionals. Slippage accrues in small increments, unnoticed until people suddenly realize that they are two weeks short of meeting tomorrow's milestone.
SUMMARY OF THE INVENTION
It is an object of the invention to provide a method and apparatus for automated prediction e.g. at task level of the time required to complete particular projects based on actual resources available. Such a method and apparatus address a crucial element of the endemic estimation problems of the IT industry, i.e. unreliable and inconsistent prediction at task level.
The present invention provides calculating apparatus that uses questionnaire-style panels to feed algorithms that generate two essential indicators for a proposed task:
- Size: taking account of the programming language in which the task will be carried out, the task's functionality and its complexity;
- Development Effort: with due regard to the experience and knowledge of the staff assigned to the task.
The above apparatus can provide an output in which Size is expressed in lines of code (LOC) and Development Effort is expressed as a time period e.g. days.
In one aspect, the invention relates to the use of function point analysis to derive the number of lines of code for carrying out a predetermined task in a particular language, and thence an expected time to write the code.
The invention also provides a method for calculating an expected time for writing software for performing a task, said method comprising under the control of a program stored in a data processor: entering into said data processor data that defines the functional content of the task; entering into said data processor data that defines local factors relating to the carrying out of the task; calculating by means of said data processor from the functional content data and the local factors data the number of lines of code that the software is expected to contain; calculating by means of said data processor from the local factors data and the expected number of lines of code the expected time to write the software; and providing an output indicating at least the expected time.
The invention also relates to software for carrying out the above method stored or carried by an optical or magnetic disc or as a signal.
BRIEF DESCRIPTION OF THE DRAWINGS
An embodiment of the invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
Figs. 1-6 are screens that appear during the running of the estimation program, Fig. 1 being for general data entry, Fig. 2 for developer details, Fig. 3 for data access, Fig. 4 for data manipulation, Fig. 5 for program logic and Fig. 6 for other factors; Fig. 6 also shows, at the right hand of the screen, the result that appears when data entry has been completed.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENT
The present task estimator ('SPECTRE') works in two phases:
1. Function Evaluation
2. Development Effort Calculation
PHASE 1 - Function Evaluation
The starting-point is Function Point Analysis (FPA) which, as previously explained, is a soundly-based method of quantifying task functionality numerically, derived over many years from industry-wide analyses of software development practice using a variety of programming languages (although debate continues concerning the precise definition of a Function Point, it is nonetheless the de facto industry yardstick for software measurement). The rationale of FPA is that a known relationship exists between Function Points and LOC based on the relative power of different programming languages. FPA is traditionally used to measure the size of software in order to compare and benchmark work produced, e.g. the 'Backfire' method for completed projects, which quantifies functional content by counting LOC. However, SPECTRE reverses the direction of the Backfire process: it estimates LOC based on specified functional content. The following table shows a few languages for illustration: the higher the position, the lower the number of LOC required to generate one point's worth of function.
RELATIVE POWER OF PROGRAMMING LANGUAGES
  RPG II
  Access
  Foxpro
  Delphi
  VB6
  HTML

Function evaluation recognizes two classes of function:
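By way of illustration only, the reversal of the Backfire direction described above can be sketched as follows. The gearing figures (LOC per function point) and the `backfire_loc` helper name are invented for the example; they are not values or names given in this document or in published Backfire tables.

```python
# Illustrative LOC-per-function-point gearing factors (assumed values).
GEARING = {
    "RPG II": 61,
    "Access": 38,
    "Foxpro": 34,
    "Delphi": 29,
    "VB6": 32,
    "HTML": 15,
}

def backfire_loc(function_points, language):
    """Estimate LOC from a Function Point total for a given language:
    the reverse of Backfiring, which derives function points from a
    completed program's LOC count."""
    return round(function_points * GEARING[language])
```

With these assumed factors, a 20-point task targeted at Delphi would be sized at 580 lines, while the same functionality in HTML would need only 300.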
- Input/Output
- Processing

INPUT/OUTPUT
Analysis of the task specification identifies its input/output operations.
SPECTRE currently recognises:
- read from a file
- write to a file
- delete from a file
- update a file
- select from a table
- insert into a table
- delete from a table
- update a table
- interface with another program
- generate a report
- generate a display

Each input/output operation in the task carries its own Function Point cost. These are accumulated, with repeated occurrences of a function shaded down in value (to take account of the probability of shared code for, e.g., error and exception handling), to give a Function Point Total for the task.
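The accumulation-with-shading step might be sketched as below. The per-operation costs and the 0.5 shading factor are assumptions for illustration; the document gives only relative costs.

```python
from collections import Counter

# Hypothetical relative Function Point costs per I/O operation type.
FP_COST = {"input": 4.0, "output": 5.0, "update": 6.0,
           "interface": 7.0, "report": 8.0, "display": 3.0}

def function_point_total(operations, shade=0.5):
    """Accumulate FP costs over a task's I/O operations, shading each
    repeat of an operation type down in value to reflect the
    probability of shared code (error and exception handling etc.)."""
    seen = Counter()
    total = 0.0
    for op in operations:
        total += FP_COST[op] * (shade ** seen[op])
        seen[op] += 1
    return total
```

Two inputs and one report would thus score 4 + 2 + 8 = 14 points rather than a flat 16 under these assumed costs.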
The following table shows the Function Point cost of input/output operations; the higher the position in the table, the greater the number of Function Points. Please note that the scale shown reflects relative, not actual, Function Points. Note also that the values indicated refer to basic 'function templates' held internally; the preliminary LOC count thus calculated for the task is later adjusted in the light of the task's complexity (see below).
RELATIVE FUNCTION POINT COST OF I/O OPERATIONS
  Input
  Output
  Update
  Interface
  Report
  Display

PROCESSING
The following components of the processing logic are identified and evaluated:
- data restructure
- conditions
- linkages with other programs
- data retrieval
- calculations
- sorting
- data presentation
- testing considerations
- performance considerations

Each applicable feature is given a weighting to represent its degree of complexity, from which a Complexity Coefficient is derived for the task. LOC is calculated as the Function Point Total multiplied by the LOC for one point's worth of function in the programming language to be used, and the Complexity Coefficient is applied to this product. The following table shows the relative complexity weighting of each of the above features; the higher the position in the table, the greater the weighting.
RELATIVE COMPLEXITY OF PROCESSING FEATURES
  Data Restructure
  Conditions
  Linkages
  Data Retrieval
  Calculations
  Sorting
  Data Presentation
  Testing
  Performance
(each rated Complex, Average or Simple)

PHASE 2 - Development Effort Calculation
Development Effort Calculation takes into account the grade and experience of the task assignee. Allowance is also made for the possible re-use of existing code and/or program design (all these factors are explained in further detail below). The result is an estimate of the number of days' effort that should be required from receipt of specification to completion of unit-testing (or equivalent acceptance point). Development Effort Calculation requires the following parameters:
- LOC
- Basic Lines Per Day
- Assignee's Grade
- Assignee's Experience
- Assignee's Knowledge
- Knowledge Required
- Percentage of Design and Code that could be adapted from existing sources

LOC has already been estimated (see above). Basic Lines Per Day is the LOC per day that the installation would expect to achieve for code-and-unit-test as an average for all development staff. Grade is not necessarily a function of job title; it is an indication of where a neutral observer would place the assignee on a scale ranging from Expert downwards. Experience is weighted by reference to a scale ranging from High downwards. The weightings are grade-specific, e.g. a high-experience senior programmer is rated at slightly less than a low-experience expert. Grade and Experience are combined to give an Experience Coefficient. The following table illustrates the relative weightings given to Grade and Experience.
RELATIVE ABILITY BY GRADE AND EXPERIENCE
  Grade: Expert, Senior, Average, Junior
  Experience: High, Average, Low

Assignee's Knowledge is an assessment of how much the assignee knows, not only of the task in question but also of relevant related subjects. It is broken down into a number of descriptors ranging from detailed knowledge of the task and related subjects, to no knowledge of the task and little or no general knowledge of related subjects. Knowledge Required can be characterized, allowing the Assignee's Knowledge to be weighted as to its relevance. This weighting is applied to the Assignee's Knowledge to give a Knowledge Coefficient. The following table illustrates the relative weightings given to the shortfall between Assignee's Knowledge and Knowledge Required; the higher the position in the table, the higher the Knowledge Coefficient.
RELATIVE KNOWLEDGE SHORTFALL
  Knowledge Required: Some, Good, Detailed
  Knowledge Available: Detailed, Good, Some, None

Many applications contain tried-and-tested pieces of design and code (for exception-handling, processing transactions against a master-file, and so forth). Percentages of Design and Code that could be adapted from other sources allow the economies of re-use to be taken into account (with an allowance for customisation).
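The re-use allowance just mentioned could be modelled along the following lines; the 20% customisation allowance and the helper name are assumptions, as the document does not quantify the allowance.

```python
def reuse_adjusted_loc(loc, reuse_pct, customisation_pct=20):
    """Reduce the LOC that must be written from scratch by the
    percentage adaptable from existing sources, net of an allowance
    for the effort of customising the adapted material."""
    effective = (reuse_pct / 100) * (1 - customisation_pct / 100)
    return round(loc * (1 - effective))
```

For a 1000-line task of which half can be adapted, 600 lines' worth of new effort would remain under these assumptions.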
The LOC estimate is divided by the basic lines per day to give the basic number of days. This figure is then multiplied by the Experience and Knowledge Coefficients to give the estimated development effort in days for this assignee.
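Putting the Phase 2 arithmetic together as stated above (the derivation of the two coefficients from the grade/experience and knowledge tables is not disclosed, so they appear here as plain inputs):

```python
def development_days(loc, basic_lines_per_day,
                     experience_coeff, knowledge_coeff):
    """Basic days = LOC / basic lines per day; the result is then
    scaled by the Experience and Knowledge Coefficients."""
    basic_days = loc / basic_lines_per_day
    return basic_days * experience_coeff * knowledge_coeff
```

For example, 720 estimated lines at 60 lines per day with assumed coefficients of 1.1 and 1.2 would give roughly 15.8 days.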
RESULTS
SPECTRE's final display is in two sections:
RESULTS - SECTION 1
This section gives the actual estimate and comprises the following items:
- LOC
- Lines per day
- Development effort in days

RESULTS - SECTION 2
This section shows a number of additional items which may be of value as well as interest. These are:
- Time saving through code/design adaptation
- Complexity rating
- Experience rating
- Knowledge rating
- Function Point score

SPECTRE evaluates all the above in terms of size, range or scope as appropriate, and appends a comment. Any factor that SPECTRE considers to be significantly outside 'normal' expectation is also flagged with an asterisk.
In use, a new software project is broken down into its component tasks (each of which will be performed by a single individual) and each task is analyzed on the basis of its functional content and the relevant local factors, including in particular the grade, experience and knowledge either of the actual person intended to be assigned to the task or of a person to be hired or recruited. The results of the analysis in terms of time and lines of code will enable a manager not only to carry out critical path analysis and other time-sensitive planning tasks but also to decide e.g. that the level of seniority or experience of the person performing the task should be reconsidered, that a particular task should be divided, or that other tasks can be combined. In this way, and task-by-task, a software development project may be more efficiently planned. Even where a task definition is not fully established, it can be very beneficial to be able to assess notional best, average and worst cases. If a task threatens to exceed reasonable limits of size, complexity or effort, or looks as if it will require special expertise, then the earlier these possibilities are flagged the better. The present program has been made easy to use to encourage iterative estimating, and, as previously indicated, HELP TEXT guidelines have been provided for every step of the estimation process.
Figs. 1-6, which are believed to be largely self-explanatory, show successive screens in the operation of a practical embodiment of the SPECTRE software running on a PC under WINDOWS. It will be appreciated that versions of the software can run on other machines, e.g. APPLE machines or machines running UNIX or LINUX. The machine will have the normal computer input and output functions, including e.g. a keyboard, a mouse or other pointing device, a CPU, random access memory, a hard disk on which the SPECTRE program will normally be stored, and output devices for peripherals, e.g. a printer. The machine will also normally have devices for communication with a local area network, a telephone network and the Internet.
In Fig. 1 the name of the program is a free-form parameter entered so that it may be displayed 'for the record' on the estimate screen of Fig. 6, which a user may wish to hard-copy for reference or save to a file or database. The program language may be selected from a drop-down menu supplied by the supplier of the SPECTRE program, parameters defining the implications of selecting any particular language being stored and used in the calculation of estimated lines and estimated time. The user is prompted to enter an average number of lines per day that is expected to be written. The program calculates a deviation from this figure, so that the figure initially entered need not be precise. A value in the range 50-70 lines per day is typical of many languages. As previously stated, it is very likely that appreciable amounts of existing program code can be adapted for use within the task, and the user is prompted to enter this as a percentage. The user is also prompted to quantify the percentage of the program that can be adapted from existing sources with particular regard to standard operations, for example data retrieval or exception handling.
With reference to Fig. 2, the user can choose not to supply developer information, in which case the relevant entry fields will not be made available and the program will not be able to take account of experience or knowledge in its estimate. The user will still be able to obtain a development time and size estimate, but based only on a deviation from the average lines per day depending on the functionality and complexity of the task. Even if the identity of the task assignee is not known at the time when SPECTRE is run, it is better to supply developer information reflecting a typical profile for a suitable person, because this will generate an advance warning that the task is likely to demand a higher than normal level of expertise or will require more effort than expected. In addition to grade and experience fields, there are four fields relating to available and required knowledge of the current task, the operating system, the installation standards and the installation hardware. If there is a shortfall in any area, this does not mean that the task program cannot be written, but that time will be needed to obtain the required knowledge, and this time is included in the estimate finally produced.
The data access screen of Fig. 3 includes a Database Access area and a File Access area. In the Inputs/Selects field the user is prompted to enter the count of tables/views from which data is read without being updated. In the Outputs/Inserts field he is prompted to enter the count of tables/views into which data is inserted. In the Updates/Deletes field he is required to enter the count of tables/views whose data is updated or deleted. In the File Access area he is required to enter under Inputs/Reads the count of files (datasets) whose records are read without update intent, under Outputs/Writes the count of files (datasets) to which records are added, and under Updates/Rewrites/Deletes the count of files (datasets) whose records are updated or deleted. In the Other I/O field, under Screens or Displays he is prompted to enter the count of screens or displays that are managed by the task, under Reports the count of reports that the task generates, and under Program Interfaces the count of programs with which the task communicates via parameters.
In the data manipulation screen of Fig. 4, the user is prompted to give a rating for data retrieval, data presentation and data restructuring. Data retrieval relates to data retrieved directly from files and/or databases. If all retrieval is performed for the user, e.g. by calls to parameterized middleware functions, then the user should enter Not Applicable. He should select Simple where the data is simple with straightforward relationships and where some editing/conversion is required. He should select Average where there are some interdependencies and a significant amount of editing/conversion is required. He should select Complex where there are multiple sources and/or data types and/or interdependencies and where a significant amount of key-handling and/or editing may be required. In the Data Restructuring field, the user will select Simple where there is little requirement for editing the data, Average where the data has some complexity, a significant amount of editing/conversion is required and there are some different record formats, and Complex where there is a large number of data items and types with a correspondingly large requirement for editing or conversion and numerous record formats. In the Data Presentation field the user will select Simple where there are up to 6 display lines or 25 display/report items and a low number of element types and relationships, Average where there are up to 15 display lines or 60 display/report items, a fair number of element types and relationships and limited user interaction with displays, and Complex where there are more than 15 display lines or more than 60 display/report items, many element types and relationships and extensive user interaction with displays, e.g. scrolling/amendment/insertion/deletion.
In the Program Logic screen of Fig. 5 the user is prompted to enter ratings for conditions, calculations and linkages. In the Conditions field, he should enter Simple where there is straight-through control flow and a low proportion of branching logic, Average where there is some non-linear logic consisting mainly of IF/THEN/ELSE or CASE constructs, and Complex where processing is heavily conditional, logic is affected by timing and/or resource-usage constraints, or there are several logic levels. Calculations should be rated as Simple where they involve mostly addition and subtraction (e.g. spreadsheet-type with column and/or row totals) or simple multiplication, e.g. compound interest calculation, Average where IF/THEN/ELSE or equivalents are used or there are straightforward iterative or statistical operations, and Complex where the operation is computation-heavy or includes recursion, non-linear calculations, calculus or the like. Linkages are classified as Simple where there are few calls to other (sub)programs and simple parameters, Average where there are numerous calls and some parameter conversion/interpretation, and Complex where there are numerous calls and/or parameter handling requires significant effort.
The Other Factors screen of Fig. 6 requires ratings to be entered for sorting and merging, test conditions and performance criteria. In the Sorting and Merging field, a Simple rating should be entered where the data is simple with little requirement for editing or conversion and key data is readily extrapolated, Average where key data derivation requires editing or conversion, and Complex where there are special exits and/or substantial record selection and/or file merging. Test conditions should be rated Simple where a test setup is straightforward or already in place, logic paths are uncomplicated and good debugging facilities are available, Average if noticeable test setup effort is required and/or there are numerous logic paths of moderate complexity with some special cases and where debugging facilities are fairly useful, and Complex where significant test setup effort is required, there are numerous logic paths, some of high complexity, labor-intensive tracking is required and debugging facilities are of limited usefulness. Performance criteria should be rated as Simple where there are no requirements beyond normal 'efficient' programming and design, Average where there are some constraints on the use of memory and/or media and/or where response times or speed of execution are important but achievable using normal methods of development, and Complex where response times or speed of execution are crucial and a dominant design requirement and/or where there is a strict restart/recovery protocol.
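One plausible way to fold the per-feature ratings of Figs. 4-6 into a single Complexity Coefficient is sketched below; the numeric weights and the 0.05 step are invented for illustration, since the document discloses only relative weightings.

```python
# Assumed numeric weights for the four possible ratings.
RATING_WEIGHT = {"Not Applicable": 0.0, "Simple": 1.0,
                 "Average": 2.0, "Complex": 3.0}

def complexity_coefficient(ratings, step=0.05):
    """Coefficient of 1.0 for an all-Average task; each feature rated
    above or below Average moves the coefficient up or down by `step`."""
    baseline = RATING_WEIGHT["Average"]
    delta = sum(RATING_WEIGHT[r] - baseline for r in ratings.values())
    return 1.0 + step * delta
```

Under these assumptions, one Complex and one Simple rating cancel each other, leaving the coefficient at 1.0.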
ACCURACY
The following table shows how SPECTRE performed with a set of test cases. The 7th and 8th programs in the table were specifically selected to test the ability to handle tasks at both ends of the size spectrum. The 9th and 10th programs were also used to test Development Effort estimation, where SPECTRE predicted 22 days for both against actual times of 23 and 25 days.
LOC - Estimated   LOC - Actual   Deviation %
 1287              1338           -3.9
 4715              4848           -2.8
 5684              5695           -4.7
 1560              1551           +0.6
11280             10776           +4.6
 5520              5963           -7.5
  540               634          -14.8
11424             12590          -10.0
 2277              2342           -2.8
 4175              4080           +2.3

The present program may be supplied on a magnetic or optical disc or as a signal containing one or more digital files for downloading via a local network or via the Internet. The invention also includes the program recorded or downloadable as aforesaid.
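For reference, the Deviation % column in the accuracy table is of the form (estimated - actual) / actual, expressed as a percentage; a minimal helper (the function name is ours, not the document's):

```python
def deviation_pct(estimated, actual):
    """Signed percentage deviation of the LOC estimate from the
    actual LOC count, rounded to one decimal place."""
    return round((estimated - actual) / actual * 100, 1)
```

For the fourth row of the table, deviation_pct(1560, 1551) gives 0.6.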

Claims (15)

1. A method for calculating an expected time for writing software for performing a task, said method comprising under the control of a program stored in a data processor: entering into said data processor data that defines the functional content of the task; entering into said data processor data that defines local factors relating to the carrying out of the task; calculating by means of said data processor from the functional content data and the local factors data the number of lines of code that the software is expected to contain; calculating by means of said data processor from local factors data and the expected number of lines of code the expected time to write the software; and providing an output indicating at least the expected time.
2. The method of claim 1, wherein the functional content data entered includes database access data, file access data and input/output data.
3. The method of claim 1 or 2, wherein the functional content data entered includes data relating to the complexity of data retrieval, data restructuring and data presentation.
4. The method of any of claims 1-3, wherein the functional content data entered includes data relating to the complexity of conditions, calculations and linkages.
5. The method of any preceding claim, wherein the functional content data entered includes data relating to the complexity of sorting and merging, test conditions and performance criteria.
6. The method of any preceding claim, wherein the local factors data includes the language in which the program will be written.
7. The method of any preceding claim, wherein the local factors data includes an average number of lines per day to be written.
8. The method of any preceding claim, wherein the local factors data includes a proportion of the program's code that can be adapted from other sources.
9. The method of any preceding claim, wherein the local factors data includes the percentage of the program's design that can be adapted from other sources.
10. The method of any preceding claim, wherein the local factors data includes data representing the seniority and/or experience of the developer.
11. The method of any preceding claim, wherein the local factors data includes data representing the knowledge of the developer.
12. The method of any preceding claim, wherein the output includes the number of lines estimated.
13. Data processing apparatus containing stored instructions for carrying out the method of any preceding claim.
14. An optical or magnetic disc or signal storing instructions for carrying out the method of any of claims 1-12.
15. Use of function point analysis to derive the number of lines of code for carrying out a predetermined task in a particular language, and thence an expected time to write the code.
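The pipeline of claim 1 (functional content plus local factors → expected lines of code → expected writing time) can be sketched as a small calculation. All names, gearing factors, productivity rates and reuse fractions below are illustrative assumptions for the sketch, not values disclosed in the patent or taken from the SPECTRE tool:

```python
# Illustrative sketch of the claim-1 method, not the patented implementation.
# Step 1: functional content (here, a function point count) and a local factor
# (target language) give an expected LOC figure via a hypothetical gearing table.
LOC_PER_FUNCTION_POINT = {"COBOL": 105, "C": 128, "Java": 55}  # assumed values

def expected_loc(function_points: float, language: str) -> float:
    """Expected lines of code for the task in the chosen language."""
    return function_points * LOC_PER_FUNCTION_POINT[language]

# Step 2: further local factors (lines written per day, proportion of code
# adaptable from other sources) turn the LOC figure into an expected time.
def expected_days(loc: float, lines_per_day: float, reuse_fraction: float) -> float:
    """Expected writing time, discounting code adapted from elsewhere."""
    return loc * (1.0 - reuse_fraction) / lines_per_day

loc = expected_loc(50, "C")  # a task sized at 50 function points, written in C
days = expected_days(loc, lines_per_day=200, reuse_fraction=0.25)
print(f"expected LOC: {loc:.0f}, expected time: {days:.1f} days")
```

The split into two functions mirrors the two calculating steps of claim 1: size is derived first, then effort, so either stage's local factors can be tuned independently.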
GB0011728A 2000-05-16 2000-05-16 Method and apparatus for calculating the magnitude of a task Withdrawn GB2362483A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB0011728A GB2362483A (en) 2000-05-16 2000-05-16 Method and apparatus for calculating the magnitude of a task

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB0011728A GB2362483A (en) 2000-05-16 2000-05-16 Method and apparatus for calculating the magnitude of a task

Publications (2)

Publication Number Publication Date
GB0011728D0 GB0011728D0 (en) 2000-07-05
GB2362483A true GB2362483A (en) 2001-11-21

Family

ID=9891652

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0011728A Withdrawn GB2362483A (en) 2000-05-16 2000-05-16 Method and apparatus for calculating the magnitude of a task

Country Status (1)

Country Link
GB (1) GB2362483A (en)

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
http://www.eLabor.com/products/project.htm, eLabor.com Enterprise Project (March 2000) *
SPR KnowledgePLAN 3 (1998), http://www.artemis.com/kpnewversion.pdf (product factsheet) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004090721A1 (en) * 2003-04-10 2004-10-21 Charismatek Software Metrics Pty Ltd Automatic sizing of software functionality
AU2004227429B2 (en) * 2003-04-10 2009-05-28 Charismatek Software Metrics Pty Ltd Automatic sizing of software functionality
AU2004227429C1 (en) * 2003-04-10 2009-10-29 Charismatek Software Metrics Pty Ltd Automatic sizing of software functionality

Also Published As

Publication number Publication date
GB0011728D0 (en) 2000-07-05

Similar Documents

Publication Publication Date Title
US5630127A (en) Program storage device and computer program product for managing an event driven management information system with rule-based application structure stored in a relational database
US7966266B2 (en) Methods and systems for cost estimation based on templates
US8195525B2 (en) Method and apparatus upgrade assistance using critical historical product information
US7769843B2 (en) Apparatus and method for capacity planning for data center server consolidation and workload reassignment
US7673340B1 (en) System and method for analyzing system user behavior
US5848393A (en) &#34;What if . . . &#34; function for simulating operations within a task workflow management system
CA2453863C (en) Database navigation
US20030033586A1 (en) Automated system and method for software application quantification
US7475062B2 (en) Apparatus and method for selecting a subset of report templates based on specified criteria
CA2417765A1 (en) Budget planning
WO2002077753A3 (en) Automated transaction management system and method
US20120290543A1 (en) Accounting for process data quality in process analysis
CN110807016B (en) A method, device and electronic device for constructing a data warehouse for financial services
US7353212B1 (en) Method and structure for assigning a transaction cost
EP3724788A1 (en) Enterprise data services cockpit
US20210103862A1 (en) Methods and apparatus for exposing workflow process definitions as business objects
GB2362483A (en) Method and apparatus for calculating the magnitude of a task
Paynter Project estimation using screenflow engineering
Rob Software size estimation: Practical models and their applications in various phases of the SDLC
Burd et al. Decision support for supercomputer acquisition
Moribayashi et al. A decision support system for capital budgeting and allocation
Westland CASE in Business and Administrative Information
Norton Don't Predict Applications When You Should Model the Business
Mandke Research in information integrity: A survey and analysis
Krallmann et al. Bonapart

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)