Measuring Privacy
Tracy Ann Kosa1, Khalil El-Khatib2,
and Stephen Marsh3
1Faculty of Science,
University of Ontario Institute of Technology,
2000 Simcoe Street, Oshawa,
Ontario, L1H7K4, Canada
TracyAnn.Kosa@uoit.ca
2University of Ontario Institute of Technology,
Oshawa, Canada
Khalil.El-Khatib@uoit.ca
3Communications Research Centre Canada
Ottawa, Canada
steve.marsh@crc.gc.ca
Abstract
There is no unified theory of privacy. Law, political science, economics, sociology and psychology have thoroughly explored the concepts of privacy, while computer science has attempted to apply these concepts with varying degrees of success. The study of privacy is often lost in a debate over values: whether privacy itself is a good thing or a bad thing, and how and when it may be reasonably invaded. This paper sets that debate aside, reasoning that because privacy is legislated the values question is no longer relevant, and proposes a theoretical mechanism for measuring privacy using trust as a model, motivated by the need (briefly examined in Section 3) for knowledge about an individual's state of privacy. Presenting three different sets of factors (human, computer and data) derived from multiple disciplines, this work identifies the list of considerations from which a state of privacy may be derived in any given situation, whether in the physical or virtual world. It also proposes an original model of the states of privacy based on the identifiability of an individual. The model is represented as a finite state machine, and the same list of factors can be used to calculate transitions in the machine.
Keywords: Privacy, Trust, Finite State Machine
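The finite-state-machine representation described in the abstract can be sketched in a few lines. The state names and transition factors below are hypothetical stand-ins chosen for illustration, not the paper's actual privacy states or factor sets; the point is only the mechanism: a current privacy state plus an observed factor determines the next state.

```python
# Illustrative sketch only: state names and factors are hypothetical
# placeholders, not the model defined in this paper.

# Privacy states ordered by increasing identifiability of the individual.
STATES = ["anonymous", "pseudonymous", "identified"]

# Transitions keyed by (current state, observed factor). The factors stand
# in for the paper's human, computer and data considerations.
TRANSITIONS = {
    ("anonymous", "data_collected"): "pseudonymous",
    ("pseudonymous", "data_linked"): "identified",
    ("identified", "data_deleted"): "pseudonymous",
}

def step(state: str, factor: str) -> str:
    """Return the next privacy state; remain in place if no transition applies."""
    return TRANSITIONS.get((state, factor), state)

# Example run: two factors move an individual from anonymous to identified.
state = "anonymous"
for factor in ["data_collected", "data_linked"]:
    state = step(state, factor)
print(state)  # identified
```

A table-driven machine like this makes the factor lists directly auditable: each (state, factor) pair that changes identifiability appears as an explicit entry.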
Journal of Internet Services and Information Security (JISIS), 1(4): 60-73, November 2011