V+Ü Empirical Evaluation in Informatics (Empirische Bewertung in der Informatik) SS 2010

This is the homepage of the lecture (Vorlesung) "Empirische Bewertung in der Informatik" (Empirical Evaluation in Informatics) and its corresponding tutorial (Übung). Both parts will be held in English this year.

Description

As an engineering discipline, Informatics constantly develops new artifacts such as methods, languages/notations, or concrete software systems. In most cases, the efficiency and effectiveness of these solutions for their intended purpose are not obvious -- especially not in comparison to existing solutions for the same or a similar purpose.

For this reason, methods for evaluating the efficacy of these solutions must be a routine part of Informatics -- a fact that is unfortunately only slowly being recognized. Evaluation is needed by those who create new solutions (that is, in research and development), but also by their users, who need to assess the expected efficacy specifically for their own situation. These evaluations need to be empirical (that is, based on observation), because the problems are nearly always too complicated for an analytical (that is, purely thought-based) approach.

This lecture presents the most important empirical evaluation methods and explains where they have been used (with examples) and where they should be used, how to use them, and what to consider when doing so.


Administration

Lecturers

Requirements/target group, classification, credit points etc.

see entry in the KVV course syllabus

Registration

Mailing list: All participants need to be members of the mailing list SE_V_EMPIR. (Please enter both your given name and your family name.) Important information and announcements will be sent via this list. Please sign up individually.

KVV (course syllabus): All participants need to have registered in the KVV.

Dates

Examination modalities

Necessary criteria for obtaining the credit points:


Content

Some of the linked documents can only be accessed from within the FU network (externally you will receive a 403/Forbidden error: "You don't have permission to access ...").

Attention: The practice sheets can now be found in the separate section Practice Sheets.

Lecture topics

The lecture is divided into three sections:

The individual lectures:
  1. Introduction - The role of empiricism:
    • Term "empirical evaluation"; theory, construction, empiricism; status of empiricism in Informatics
    • Hypothetical examples of use
    • quality criteria: reliability, relevance
    • Note: scale types
  2. The scientific method:
    • Science and methods for gaining insights; classification of Informatics
    • The scientific method; variables, hypotheses, control; internal and external validity; validity, reliability, relevance
  3. How to lie with statistics:
    • When looking at somebody else's conclusions from data: What is actually meant? What specifically? How can they know it? What is not said?
    • Does the measurement distort the meaning? Is the sample biased? And so on.
    • Material: book on the topic; Study on alternative ink; article with arguments against hypothesis testing: "The earth is round (p < 0.05)". (A small simulation sketch on false positives follows below.)
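
To make the argument against naive hypothesis testing concrete, the following sketch (in Python; not taken from the cited article, and the sample sizes and t-threshold are assumptions) repeatedly compares two samples drawn from the same population and counts how often a test nevertheless reports "significance" at the 5% level.

    # Sketch: how often does a significance test "find" a difference that is not there?
    # Both groups come from the same distribution, so every "significant" result is a
    # false positive; at alpha = 0.05 we expect roughly 5% of them.
    import random
    import statistics

    def welch_t(a, b):
        # Welch's t statistic for two independent samples
        va, vb = statistics.variance(a), statistics.variance(b)
        return (statistics.mean(a) - statistics.mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

    random.seed(1)
    trials, false_positives = 1000, 0
    for _ in range(trials):
        a = [random.gauss(0, 1) for _ in range(30)]
        b = [random.gauss(0, 1) for _ in range(30)]   # same population as a
        if abs(welch_t(a, b)) > 2.0:                  # |t| > 2.0 approximates p < 0.05 here
            false_positives += 1

    print(f"false positive rate: {false_positives / trials:.3f} (expected: about 0.05)")

With many tests and selective reporting, such "findings" accumulate, which is exactly the kind of distortion lecture 3 warns about.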

  4. Empirical approach:
    • steps: formulate aim and question; select method and design the study; create the study situation; collect data; evaluate the data; interpret the results.
    • example: N-version programming (article, reply to the criticisms against it)
  5. Survey:
    • example: relevance of different topics in Informatics education (article)
    • method: selection of aims; selection of group to be interviewed; design and validation of the questionnaire; execution of the survey; evaluation; interpretation
  6. Controlled experiment:
    • example 1: flow charts versus pseudo-code (article, criticized prior work)
    • method: control and constancy; problems in achieving constancy; techniques for achieving constancy (a small randomization sketch can be found after the lecture list)
    • example 2: use of design pattern documentation (article)
  7. Quasi experiment:
    • example 1: comparison of 7 programming languages (article, detailed technical report)
    • method: like controlled experiment, but with incomplete control (mostly: no randomization)
    • example 2: influence of work place conditions on productivity (article)
  8. Benchmarking:
    • example 1: SPEC CPU2000 (article)
    • Benchmark = measurement + task + comparison; problems (costs, task selection, overfitting); quality characteristics (accessibility, effort, clarity, portability, scalability, relevance) (article; a minimal harness sketch follows below)
    • example 2: TREC (article)
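
A benchmark in the sense of lecture 8 combines a fixed task, a measurement, and a comparison across candidates. The sketch below (in Python; the two sorting candidates and the task size are invented for illustration and are unrelated to SPEC or TREC) shows that minimal structure.

    # Sketch of a minimal benchmark: a fixed task, a timed measurement,
    # and a comparison of several candidates on the same task.
    import random
    import time

    def insertion_sort(data):
        result = list(data)
        for i in range(1, len(result)):
            j = i
            while j > 0 and result[j - 1] > result[j]:
                result[j - 1], result[j] = result[j], result[j - 1]
                j -= 1
        return result

    candidates = {"builtin sorted": sorted, "insertion sort": insertion_sort}

    random.seed(0)
    task = [random.randint(0, 10**6) for _ in range(2000)]    # the fixed benchmark task

    for name, solver in candidates.items():
        start = time.perf_counter()
        solver(task)                                          # measurement
        elapsed = time.perf_counter() - start
        print(f"{name:15s} {elapsed * 1000:8.2f} ms")         # comparison

The quality characteristics listed above apply even to such a toy: the task must be accessible and relevant, and a candidate tuned only to this one input would be overfitting.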

  9. Data analysis - basic terminology:
  10. Data analysis - techniques:
    • Samples and populations; average value; variability; comparison of samples: significance test, confidence interval; bootstrap; relations between variables: plots, linear models, correlations, local models (loess). (A bootstrap sketch follows below.)
    • Article: "A tour through the visualization zoo"

  11. Case study:
    • example 1: Familiarization with a software team (article)
    • method: characteristics of case studies; what is the 'case'?; use of many data types; triangulation; validity dimensions
    • example 2: an unconventional method for requirements inspections (article)
  12. Other methods:
    • The method landscape; simulation; software archeology (studies based on existing data); literature study
    • example simulation: scaling of P2P file sharing (article; a toy simulation sketch follows below)
    • example software archeology: code decline (article)
    • example literature study: a model of the effectiveness of reviews (article)
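
Simulation as a method can be illustrated with a toy model (this is a generic sketch in Python, not the model from the cited P2P article): it counts the rounds needed to distribute a file when only a central server uploads, versus when every peer that already has the file forwards it as well.

    # Toy simulation sketch: rounds needed to get a file to n receivers.
    # (a) client/server: only the server uploads, one copy per round;
    # (b) p2p: every node that already has the file uploads one copy per round.
    import math

    def rounds_client_server(n):
        return n                      # one copy per round -> n rounds

    def rounds_p2p(n):
        have, rounds = 1, 0           # initially only the source has the file
        while have < n + 1:           # until the source and all n receivers have it
            have *= 2                 # everyone with the file uploads one copy
            rounds += 1
        return rounds

    for n in (10, 100, 1000, 10_000):
        print(f"n={n:6d}  client/server: {rounds_client_server(n):6d} rounds   "
              f"p2p: {rounds_p2p(n):3d} rounds (log2(n) = {math.log2(n):.1f})")

Even such a crude model makes the qualitative scaling behaviour visible, which is the point of simulation as an evaluation method.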

  13. Summary and advice:
    • Role of empiricism; quality criteria; generic method; advantages and disadvantages of the methods; practical advice (for data analysis; for conclusion-drawing; for final presentation); outlook
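
Referring back to lectures 6 and 7: the decisive difference between a controlled and a quasi experiment is random assignment of subjects to conditions. The following sketch (in Python; subject names and condition labels are invented) shows this randomization step in its simplest form.

    # Sketch: random assignment of participants to the two conditions of a
    # controlled experiment (e.g. "with documentation" vs. "without").
    import random

    participants = [f"subject_{i:02d}" for i in range(1, 21)]   # hypothetical subjects

    random.seed(42)                    # fixed seed only to make the example reproducible
    random.shuffle(participants)       # randomization breaks any systematic ordering
    half = len(participants) // 2
    assignment = {
        "treatment": participants[:half],
        "control":   participants[half:],
    }

    for condition, group in assignment.items():
        print(condition, sorted(group))

In a quasi experiment this step is missing (the groups are given, e.g. by work place), which is why uncontrolled differences between the groups remain a threat to internal validity.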

Aims of the tutorials

Practice sheets

(These links will be added continuously as the course proceeds.)

Groups

Group Number | Members | Final Presentation
1 | Phillipp Gaertner, Eike Starkmann, Stephanie Schultz | else
2 | Martinus Dipobagio, Franz Zieris | ppt
3 | Christian Behnert, Phillipp Kalmbach, Matthias Niemann | else
4 | Hendrik Degener, Lisa Dohrmann, Ahmad Haidar | pdf
5 | Armin Feistenauer, Patrick Remmler, Thomas Sobik | else
6 | Oliver Gesch, Miriam Ney | ppt
7 | Sebastian Barthel, Matthias Bohnstedt |
8 | Selimkhan Achmerzaev, Bernhard Kau, Christoph Sackl | ppt

Changes over the years

Literature


(Comments)

Should you have comments or suggestions concerning this page, you may add them here (possibly with date and name):

"wer zuerst kommt, mahlt zuerst" ist natürlich ein geniales verfahren für die zuteilung der mailinglisten! der erste trägt sich mit all@fu-berlin.de (ich weiß, die gibts nicht) ein und da alle anderen mailinglisten nur teilmengen dieser sind, können die anderen sich an den daumen spielen, oder wie? -- DennisHartrampf - 14 Jun 2009

No. If several groups have a legitimate interest in the same mailing lists, we resolve that conflict verbally in the tutorial session (as has already happened). Nobody will have to twiddle their thumbs! -- MartinGruhn - 18 Jun 2009