
Volume 2 - issue 2 - 2006

When means become ends: technology producing values

In this article, Bjørn Hofmann explores the connection between technology and values. Technology has become the symbol of our culture and has developed into a goal in itself. The author emphasizes our responsibility even if technology seems uncontrollable and cannot easily be removed once it has been invented. By building up a technological axiology, Hofmann argues for the importance of taking responsibility in order to avoid a technological imperative. Bjørn Hofmann is adjunct professor at the University of Oslo and at the University College of Gjøvik.
 
 

Bjørn Hofmann

Adjunct Professor
Section for Medical Ethics, Faculty of Medicine
University of Oslo

Institute for Health Technology
University College of Gjøvik, Norway
 
 
 

Abstract

Technology has become the symbol of our culture. The claim that we are subject to a technological imperative is therefore a fundamental cultural critique: we do not control technology; rather, technology controls us. An alternative way to formulate this is to claim that technology cannot be "made down" once it is made up; we just have to make the best of it. Accordingly, it has been argued that technology has evolved from being merely a means to becoming an end in itself. This article investigates this claim by analyzing the relationship between technology and values. The examples stem from the technologies of medicine and weapons because they clarify this relationship. It is argued that technology relates to values in two ways: technology both raises general questions about values and is value-laden due to its very function. However, although technology is value-laden, it does not necessarily issue an imperative mandate. One reason for this lies in our responsibility. We are inevitably responsible for all aspects of technology, i.e. development, construction, production, commercialization, implementation, and use. Referring to a technological imperative to explain and defend our decisions with respect to technology constitutes an unjustified renunciation of our own responsibility. Hence, the article tries to underscore our responsibility by developing a technological axiology.
 

Introduction

A core claim about technology is that it has changed the relationship between means and ends. While technology is traditionally thought of as a means to human ends, it has actually become an end in itself. One could even argue that it has become an end to which we adapt our lives - a gauge for human life (ars mensura).

We can recognize this in ordinary language, where we encounter claims such as "technology generates demands," "technology seeks problems for its solutions," and more specifically "drugs looking for diseases" (Vos 1991). Within communication technology, the "Integrated Services Digital Network" (ISDN) was established to integrate speech and digital information, e.g. from personal computers. This service received little interest for many years, and it was only the explosive use of the Internet and web browsers that fuelled the demand for ISDN. Similarly, "electronic information highways" were launched and developed without any particular use in mind, i.e. no external end. The task afterwards lay in finding uses for the technology.

There are also several examples of technologies in the field of medicine that had no clear application: nuclear magnetic resonance, impedance analysis, optical spectral analysis. Within all these technologies, millions of dollars have been invested to find medical applications, such as MRI, impedance imaging, and optical tissue characterization.  Although only the first of these is an example of a successful medical technology, all are examples of technologies looking for applications. 1

In a now classic book, Langdon Winner described this swap between means and ends as "reverse adaptation": human ends are adapted to the characteristics of the available means (Winner 1977: 229).
 
The goals, purposes, needs, and decisions that are supposed to determine what technologies are, are in important instances no longer the true source of their direction. Technical systems become severed from the ends originally set for them and, in effect, reprogram themselves and their environments to suit the special conditions of their own operation. The artificial slave gradually subverts the rule of its master (Winner 1977: 227). 

Hence, although technology was meant to free the human being from being a slave of nature, one can argue that we are subject to a new technological enslavement. While technology was supposed to increase our self-determination and choice, instead it seems to have reduced our autonomy. Technology appears to promote a subtle but extensive change in our thinking and motivation. Efficiency, expedience, measurability, rationality, quantification, productivity, and technical improvement come to govern activities that have traditionally been guided by quite different considerations, namely qualitative and value-related aims (Winner 1977: 229). 2

According to Winner, this reverse adaptation and the corresponding reduction of human autonomy create the impression that we are subject to a technological autonomy. Winner is supported by others claiming that there are technological values external to human values that enforce our actions: there is a technological imperative (Wolf & Berle 1981; Tymstra 1989), technology is rampant, perpetuating, self-augmenting (Cassell 1993), autonomous (Ellul 1964; Winner 1977), and there is a belief that technology can overcome all human challenges, i.e. a "technological fix" (Callahan 1996; Shackleford 2006). 3

The term "technological determinism" has been used as an umbrella term for theories conceiving of technology as influential on human modes of action (Sejersted 1998; Smith and Marx 1994). Although strong versions of "technological determinism" having a view of technology as something incomprehensible, independent and autonomous that reduces human self-determination, may be prevalent in the general population, they appear to be rather unattractive to scholars. It can be argued that if we really are governed by technology, it would be better for us to be ignorant of its dominance. Moreover, strong versions of technology determinism are obliged to explain how technology which is developed, produced, commercialized, bought, implemented, used, and disposed of by human beings can control those same humans. The difficulties in explaining this have resulted in various interpretations and nuanced versions of technological determinism. 4

This is not the place to elaborate on the peculiarities of technological determinisms, but only to point out that many of them face certain challenges. If we really are determined by technology in one way or another, it must mean that we have less responsibility for technology. We can only be held responsible for acts and situations we can actually do something about, i.e. "ought" implies "can" (Tranøy 1972; 1975). If we are controlled or coerced, the responsibility for our actions is diminished.

Moreover, it appears to be difficult to find the technological determinant that could diminish our responsibility for technology. It seems hard to free ourselves from the responsibility for its development, production, commercialization, implementation, and use. The point is that disclaiming responsibility for technology may challenge profound structures and values in modern democracies. One reason for this is that the reverse adaptation and the notion of a technological imperative become a self-fulfilling prophecy.

If we believe that technology determines our choices and that we are therefore not responsible but only act in accordance with a technological imperative, we let technology decide for us in the sense that we renounce responsibility for our actions. This means that the belief in technology, i.e. making means become ends, results in a situation with a self-enhancing use of technology which can be interpreted as a technological determinism. Technology appears to be imperative. This is, however, a dangerous illusion, as it legitimizes the production and implementation of technology without assessment. 5 Indeed, it appears to be difficult to argue that we who develop, produce, implement, and use technology should not be responsible for it. The point is that our seduction by technology is one of its most dangerous aspects.

How then is it possible that we feel controlled by technology when this is seemingly impossible? How can the common conception of technological determinism be reconciled with the scholarly rejection of determinism?

I will try to address these questions by investigating the relationship between technology and values. 6 Technology appears to relate to values in at least two profoundly different ways. Firstly, technology can raise general value issues. Secondly, technology is constituted by its end; its teleological nature makes it value-laden. Hence, technology both makes values topical (by raising ethical questions) and is value-laden (by promoting values). 7

Technology raises ethical issues

Technology confronts us with a series of ethical issues: Is it right to clone 8 human beings? Should prenatal screening be allowed on demand? These are general value issues not related to technology as such. Modern gene technology can be applied to many purposes other than reproductive cloning of human beings. Similarly, diagnostic ultrasound technology can be used for purposes other than detecting deficiencies or diseases in fetuses before the gestational limit for legal abortion. Whether it is right to "produce genetically identical human beings" or to select fetuses with respect to their characteristics (other than severe disease) are ethical issues existing independently of gene technology and diagnostic ultrasound technology. Technologies only raise these value issues and make them more apparent.

The fact that we allow gene technology that can isolate, characterize, and modify DNA does not mean that we have to allow reproductive cloning. The use of diagnostic ultrasound does not necessarily entail prenatal selection for all possible conditions or characteristics. 9 However, before technology made it possible to "produce genetically identical human beings" and to perform selective termination of pregnancies, we were not forced to decide on these kinds of ethical issues. Technology highlights questions concerning values and makes them topical. This means that technology's general potential renders certain ethical issues current even though they are not specifically related to a particular technology.
 

Technology is value-laden

What about bacterial weapons and respirators? Can they be used for many purposes and do they therefore raise general ethical issues as well? It is hard to find purposes for bacterial weapons other than hurting people by making them sick. By the same token, respirators maintain artificial respiration and can only be used for this purpose (as respirators).

Hence, bacterial weapons and respirators do not raise more general questions of values, but they promote particular values, i.e. that it is good to hurt people (defined as enemies) by making them sick, and that it is good to artificially maintain respiration. These value issues are related to technology in a basic way and are not only raised by it. If we accept and are prepared to use respirators, then we cannot renounce artificial respiration. Conversely, it is difficult to use bacterial weapons (as bacterial weapons) without making people sick. It would contradict our values.  10

What then is the difference between genetics and bacterial weapons, between diagnostic ultrasound and respirators? Both appear to be technologies that can be applied for good and for bad. The point is that genetics and diagnostic ultrasound are examples of general technologies that can be applied for many purposes, and not only for cloning human beings and for selective abortion. Hence, more general ethical issues are involved with these technologies: is it good or bad to isolate, characterize or modify DNA and is it good or bad to produce an image of intracorporeal anatomical structure by means of ultrasound reflections? Bacterial weapons and respirators, however, do not involve more general questions of values. They can only be used for hurting people and for artificially maintaining respiration.

The point is that bacterial weapons and respirators have been described by their inherent function.  By contrast, cloning and ultrasound screening of fetuses before the gestational limit for legal abortion are only one of many applications of genetics and the diagnostic ultrasound, respectively.  11

Hence, my objective is not to generate a certain typology of technology or to differentiate good technology from bad. Instead, it is to point to something general about all kinds of technology: technology raises questions of values which are general because they are not specifically related to any particular technology. Other technologies may raise such questions as well. There may be other technologies for cloning humans or other methods of facilitating selective abortion of which we are not aware. However, there are some ethical issues that are specific to a given technology and that cannot be separated from that technology. Such ethical issues are related to function. Technology thus raises general questions of values, and is generally value-laden through its inherent function. Every technology has a function, and every function is related to a purpose and a value.

As described above, there appears to be a difference between the many functions of an ultrasound machine and the function of a respirator. A (diagnostic) ultrasound machine can be used for many purposes: it can be used for diagnosing cancer, for examining joints, for screening of pregnant women, and for guidance and orientation during surgery. Accordingly, one might group such purposes in even more general categories, such as diagnosis of diseases, screening, and assistance during treatment. Furthermore, one might continue such a generalization, resulting in an overarching purpose of an ultrasound machine without which it would not be an ultrasound machine, such as for example to produce images of intracorporeal structures by means of variation in ultrasound reflection in tissue.

This makes the function of a technology its most general defining characteristic. Hence, the ethical issues related to the many particular purposes have to be addressed in relation to those purposes. Whether it is good or bad to carry out cancer diagnostics is not an ethical issue that is specifically linked to ultrasound technology. Again, many other technologies might do the same. In the same vein, the accumulation of diagnostic knowledge through the screening of pregnant women involves ethical issues which are not specifically related to ultrasound technology. However, the ethical issues related to the ultimate and most general purpose of the ultrasound machine cannot be separated from the technology in question.
Different characteristics of various goals of technology are outlined in the table below. The point is that only the most basic level concerns values which are actually related to technology as such, whereas the subsequent ends are ascribed to technology due to external values.
 

Teleological level | Overall level | Particular level: e.g. ultrasound
Function | To look into the body | To produce an image of intracorporeal anatomical structure by means of ultrasound reflections
Purpose | To gain knowledge | To recognize conditions, to diagnose
Intention | To obtain choice of action | To make a prognosis, treat, prepare for emergencies, (sex) selection
Intention′ | To make progress | The ultrasound device as a symbol of progress: the NEW

 

The paradox of value-ladenness

This leads to an apparent paradox. On the one hand, technology is fundamentally value-laden because of its function. On the other hand, the ethical issues that are often thought of as associated with particular technologies, such as cloning and diagnostic ultrasound, are actually only made topical by those technologies. This means that of all the ethical issues raised by technology, only some are "genuinely technological."

Let us now return to the original question concerning why people feel that they are steered by technology, even though the justification for this impression is so difficult to give.  Also, how can technological means have become ends in and of themselves?

The point is that technology is a complex phenomenon. We cannot relate to it in only one particular way. Technology can be conceived of as both free of values and value-laden. Many ethical issues are not related to technology, except for being raised by it. However, some ethical issues are clearly related to a particular technology due to its characteristics, most especially its function. Therefore, it becomes important to differentiate between when technology promotes values 12 and when it raises general value issues. This is important because if means become ends in what Winner characterized as "reverse adaptation" to an "autonomous technology," it is particularly important to be explicit about what kind of technological end becomes valuable.

If we do not differentiate between values raised by technology and values inherent in technology, we fall subject to several fallacies.
 

The fallacy of the value neutral stance

The fact that technology raises questions of values which are independent of or at least external to the technology in question has caused many to argue that technology is value neutral, i.e. technology is strictly instrumental. Although the value neutral stance appears to be popular, few scholars have found it defensible. One exception is the Swedish physician and philosopher Per Sundström (Sundström 1998).

According to the value neutral stance, technology is value neutral because a) we can choose whether or not to develop, produce, or use technology, b) we can use it for different purposes, and c) technology is based on science, which is value neutral. The problem is that there are situations (related to a technology's function) where we cannot choose whether to use a certain technology or not, e.g. if a dying child could be saved by using an available respirator. Moreover, even if science were value neutral, it is its application that relates technology to values, and this can hardly be annulled by its scientific basis (Hofmann 2002a).

The point is that the value neutrality stance appears to ignore that technology inherently has an end, because of its function. Disregarding the function of technology is to ignore one of its inherent characteristics. One cannot create respirators independently of the goal of artificially maintaining respiration (otherwise they would not be respirators).

When we create technology, we simultaneously make choices about values. If we produce respirators and bacteriological weapons, then this implies that being able to sustain artificial respiration and to subdue people by making them sick are worthwhile goals. Ignoring such an implication can lead to technology establishing values in a covert manner. The introduction of technology can be a promotion of underlying values. In this manner, technology can produce values and the value neutrality stance can result in a "technological imperative." To believe that there is a value-free zone in dealing with technology can be dangerous.

The fallacy of technological imperative

A second fallacy is to believe that all issues raised by technology are related to technology as such. To believe that the question of whether it is right or wrong to clone human beings is a question of whether gene technology is good or bad is to make technology more value-laden than is justifiable. Similarly, it appears fallacious to boil down all questions about whether we should allow fetus selection to the issue of ultrasound technology, or to equate issues of involuntary infertility with questions of artificial reproduction technology (ART). This would be to reduce ethical issues to questions of technology, and to make technology "value-imperative." As argued before, cloning and selective abortion are general value issues that are simply raised by particular kinds of technology.

The danger with assigning an ethical dimension to technology is that we may be tempted to reduce all ethical issues to technological matters, nourishing the belief that ethical issues can be solved by technology, the so-called technological fix. To implement ART involves an ethical question, but the intrinsic ethical issue concerns whether infertility is a bad thing which should be remedied, rather than any particular method of in vitro fertilization (IVF).
Equating technology with ethical issues is problematic because it makes technology a normative agent. In the literature, technology has been described as a Frankenstein's monster (Winner 1977) and as the sorcerer's broom (Cassell 1993). Although it may be difficult to identify such a technological actor or subject, the ascription of values to technology makes it a normative actor, which revives the monster metaphor. It can also explain people's notion of being powerless and steered by technology.

The danger with the fallacy of the technological imperative is that ascribing values and characteristics to technology above and beyond its function makes us subject ourselves to an unjustified imperative. The belief in a technological imperative represents a renunciation of responsibility which may lead to irresponsible actions, which may in turn reinforce the belief (Hofmann 2002b).

The technological imperative leads to reduced self-determination. Reduced autonomy also entails less responsibility. Despite the general feeling of reduced control, less responsibility for technology is difficult to defend. We are responsible for our actions when we develop, commercialize, buy, implement, and use technology. Hence, it is hard to see how our autonomy and our responsibility with respect to technology could be reduced. 13

A challenging case: on technology pragmatics

Let us now apply the results from this analysis to a particular case. So far I have used examples from medicine and weapon technologies. This choice is not accidental, of course. Both areas exemplify the relationship between technology and values in an illustrative manner and both have been central in debates about values.

The debate on prenatal diagnosis, and particularly ultrasound screening, within the gestational limit for legal abortion is a good example of technology's impact on values, as described above. A key claim in the debate was that one should not allow the application of ultrasound technology to screening, because it would result in sex selection or selective abortion for minor defects (Fosterdiagnostikk 2000). It has been argued that if the technology were implemented, one would not be able to restrict or abolish its use. In short, once a technology is implemented, we have to accept it. 14

If this is correct, it means that although we can develop, produce, and commercialize technology as independent human beings, we lose our autonomy once that technology is implemented. As previously indicated, the challenge with such a claim is to explain exactly how technology reduces our autonomy at the moment of its introduction.

Indeed, experience from many countries indicates that once this (ultrasound) technology has been implemented, it tends to be applied to purposes other than those originally conceived of (as well as discussed and decided upon). Once a technology has been applied in research, for instance, it will also be used clinically, almost independently of the results with respect to efficiency (Eiring and Vollebæk 2001). Should we ignore this? Would it not be wise to apply such empirical knowledge in a pragmatic fashion? Should we not proceed cautiously and avoid sliding down "the slippery slope"?

Technology is a practical matter, and it would be wise to relate to it pragmatically. The problem is how to justify taking measures against technology on the grounds that we do not control it. How can we be autonomous enough to restrict the development and implementation of technology (by being cautious), but not autonomous enough to restrict its use?

Following from the above analysis, it would be a fallacy to believe either that technology is value-laden as such or that it is neutral with respect to values. Even after a technology has been implemented, there is nothing about it that makes it impossible to decide not to use it, if we really want to. Part of the problem in our relationship with technology appears to be our ambivalence: we want technology because it promotes our interests (through its function), while we do not like its side effects: nobody wants pollution, but everybody wants the car.

At the same time, the ultrasound example illustrates how important it is to differentiate the respects in which technology is related to ethics. If we were to ask not only whether it is right to use ultrasound technology to diagnose fetuses within the first trimester, but also whether it is right to make images of intracorporeal structures by using ultrasound reflection, we would raise not only general ethical issues but also questions about the technology as such. In the latter case it is not possible to differentiate between technology and values.

If we conceive of (diagnostic) ultrasound as good, we cannot simultaneously maintain that it is bad to produce images of intracorporeal structures by means of ultrasound reflections in tissue. Here we are dealing with the description of the function of technology, and, hence with the extent to which technology is linked to values. It therefore becomes essential to differentiate those cases where technology may be equated with ethical issues from those situations where technology merely raises ethical issues. In both cases, it is important to focus on the questions of values (and not only on the technology itself).

At the same time, the example shows that technology gives us choices we were not previously aware of and that we were perhaps not prepared for. Imagine that a routine prenatal check-up reveals that there is a certain probability of a child having Down syndrome (or another condition). Another test must be undergone to learn more, but that test involves a risk of miscarriage, whether or not the fetus actually has the condition in question. What does one choose to do? This shows how important it is to clarify the ethical issues raised by technology, so that we are not forced to make impetuous decisions that appear to be compelled by technology. It seems that we tend to take advantage of technological possibilities because we are afraid that we will otherwise regret not doing so (Tymstra 1989). This indicates that we are not mature enough to make the ethical decisions that the advent of technology makes possible, and it also shows the importance of perpetuating a debate about ethics in our modern society, where technology is so prevalent.

Another pragmatic approach would be to argue that, since we are not prepared to tackle the ethical issues raised by technology, we should proceed cautiously by restricting the development and implementation of technology. However, it is important to notice that the precautionary principle only argues that ignorance is not an argument for technology. 15  Not knowing the consequences of a given technology cannot be used as an argument for implementing it. Similarly, we could argue that the lack of control is not an argument for implementing technology.

Technology's ends and our values

If it is true that our means become our ends either because we do not address important issues raised by technology (the value-neutrality fallacy) or because we make all ethical issues into technological issues (the "value-ladenness" fallacy), we should be concerned with how technological means become human ends (reverse adaptation).

Although this is not the proper place to debate the different kinds of values and their internal relationship, something should be said in order to clarify what is meant when means become ends. First, making means become ends can signify that technology's (functional) value becomes something which is valued. Here technology is of instrumental value. We come to use and value technology because it gives us pleasure or other (intrinsic) values.

A much stronger version of technology having become an end is if its ends actually become our intrinsic values. How can technology's instrumental values (towards intrinsic values) become intrinsic? Cell phones and audio technology appear to be examples of this: cell phones appear not merely to be used in order to communicate, but to obtain some other value. Audio enthusiasts tend to buy extremely expensive equipment not in order to listen to Wagner or Wolfmother, but rather for the technology itself.

However, most versions of "reverse adaptation," making means into ends, are less extreme than these. Our values are influenced by technology to a greater extent than a simple appreciation of technology's instrumental values would suggest, but to a lesser extent than if we had made the instrumental value of technology (i.e. its function) into a value in and of itself (an intrinsic value).

The in-between situations, where technology appears to represent both instrumental and other extrinsic values, may be difficult to handle. 16 Nevertheless, it appears clear that technology has more than merely instrumental value when we a) ignore side effects, b) use technology because of its symbolic value (e.g. power and freedom), and c) adapt to technology in order to obtain its (instrumental) values. 17

Hence, means may become ends in a variety of ways, and technology's values may be diverse. The values making technology an end may be values related to its function (instrumental values). Nevertheless, there may also be other values attributed to technology. Only in a few cases will technology's end become an intrinsic value. If technology does not have an intrinsic value, it will become hard to renounce responsibility for its implementation and use due to its extrinsic (e.g. its instrumental) values.

Technology changes us - but not our responsibility

It is clear that technology changes us. It changes not only our environment and our actions and activities, but also our thoughts and ideas. Ultrasound technology changed our ideas about the status of the fetus. Similarly, a series of machines and tools, for instance the personal computer, has changed our conception of work. Heidegger's theory of technology tries to make sense of the complex and profound role of technology in being human (Heidegger 1953; Dreyfus 1997). Technology is part of forming us as human beings. Man develops through the technology he creates. In this way technology is liberating. At the same time, technology sets up a framework for our idea of self; it is our perspective on the world, which we cannot escape. According to the latter perspective, man is enframed (Gestell). Hence, technology is also restrictive. However, technology's enframing is something different from a technological imperative. It is a restriction which we ourselves have created, and which we can change (e.g. by changing our use of technology or by creating new and different technologies).

Technology tends to change us, either in a liberating or confining manner. Nevertheless, it does not free us from our actions with or without technology.

Technology: value productive, but not imperative

Technology is value active in two different ways: it raises issues of values and it promotes them (i.e. is value-laden). This explains why technology appears to many as controlling and governing, even though we ourselves develop, produce, commercialize, buy, implement, and use it. We tend to think that technology is value-laden as such, and that it promotes its own values, contrary to ours. Therefore, it appears to be important to acknowledge that technology's value-ladenness is limited to its function. All other values are human values attributed to it.

On the other hand, we tend to believe that technology is a value-neutral means to an external end. Correspondingly, it appears to be important to recognize that technology raises a series of general ethical issues, but that only a few ethical issues are related to the technology qua technology. Therefore it becomes important to differentiate between technology's most basic instrumental value (its function) and other values. Moreover, because technology raises so many general ethical issues, it appears to be important to debate such issues in an open manner. If not, many important choices about values might be made through our choice of technology, and the technological imperative will appear to be a self-fulfilling prophecy. The danger is that the means justifies the end. Therefore, an open debate about values will promote a healthy society.

The myth of the technological imperative is by no means sufficient to release us from our responsibility with respect to technology. On the contrary, our responsibility for technology is what enables us to reject a technological imperative.

References

Cassell EJ. The Sorcerer's Broom: Medicine's Rampant Technology. Hastings Center Report 1993; 23(6): 32-39.
 
Borge OJ. Fosterdiagnostikk og verdier [Prenatal diagnosis and values]. Oslo: Bioteknologinemnda. Rapport. 2004: 32-4. ISBN 82-91683-24-7.
 
Callahan D. The Goals of Medicine: Setting New Priorities. Hastings Center Report, November-December, 1996.
 
Dreyfus HL. Heidegger on Gaining a Free Relation to Technology. In: K. Shrader-Frechette and L. Westra (eds.), Technology and Values. New York: Rowman & Littlefield Publishers, 1997: 107-14.
 
Eiring Ø, Vollebæk L-E. Behandling på sviktende grunnlag [Treatment on a flawed basis]. Dagens Medisin. 10.05.2001.
 
Hofmann B. On the value-ladenness of technology in medicine. Medicine, Health Care and Philosophy 2001; 4(3): 335-345.
 
Hofmann B. Technological medicine and the autonomy of man. Medicine, Health Care and Philosophy 2002a; 5: 157-67.
 
Hofmann B. Is there a technological imperative in health care? International Journal of Technology Assessment in Health Care 2002b; 18(3): 675-89.
 
Hofmann B. Vi vil jo ha ultralyd! Om teknologi og verdier [We do want ultrasound! On technology and values]. In: Per Nortvedt and Åshild Slettebø (eds.), Etikk for helsefagene [Ethics for the health professions]. Oslo: Gyldendal Akademisk, 2006: 104-123.
 
Fosterdiagnostikk: Trygghet eller trussel [Prenatal diagnosis: Security or threat]. Debatthefte fra seminar om fosterdiagnostikk [Debate booklet from a seminar on prenatal diagnosis]. Politisk notat 2, 2000.
 
Sejersted F. Teknologipolitikk [Technology policy]. Oslo: Universitetsforlaget, 1998.
 
Smith MR, Marx L. Does technology drive history?: the dilemma of technological determinism. Cambridge, MA: MIT Press, 1994.
 
Sundström P. Interpreting the notion that technology is value-neutral. Medicine, Health Care and Philosophy 1998; 1: 41-45.
 
Tranøy KE. 'Ought' implies 'can': A bridge from fact to norm? Part I. Ratio 1972; 14: 116-30.
 
Tranøy KE. 'Ought' implies 'can': A bridge from fact to norm? Part II. Ratio 1975; 17: 147-75.
 
Tymstra T. The imperative character of medical technology and the meaning of "anticipated decision regret". International Journal of Technology Assessment in Health Care 1989; 5: 207-13.
 
Vos R. Drugs looking for diseases: innovative drug research and the development of the beta blockers and calcium antagonists. Dordrecht: Kluwer Academic Publishers, 1991.
 
Winner L. Autonomous Technology. Cambridge MA: MIT Press, 1977.
 
Shackleford B. The Technological Fix: How People Use Technology to Create and Solve Problems (review). Technology and Culture 2006; 47(1): 246-248.
 

 
1 It is important to notice that I do not refer to the explorative development of technology, but to the large-scale commercialization and marketing of technology before any applications are identified. In the explorative phase, where somebody pursues their interests or intuition to investigate certain physical phenomena or mechanisms, there is no clear end for the technology. The phenomenon investigated in this paper belongs to the phases following this initial, and often arbitrary, phase: the implementation of a certain technology on a broad basis without any well-defined end.
 
2 With Habermas one could say that instrumental rationality (together with strategic rationality) dominates over communicative rationality, which results in cultural impoverishment and colonizes our lifeworld. As the teleological rationality of the means has set the standard, it reduces the possibility of discussing human ends. We cannot discuss the ends of our means in the language of the means.
 
3 For further details on the technological imperative in health care see (Hofmann 2002b).
 
4 Nomological determinism, normative determinism, and determinism due to unintended consequences are but some examples of this (Smith & Marx 1994).
 
5 No other assessment than that it is a technology, and therefore good.
 
6 With respect to the relationship between technology and values, see (Shrader-Frechette 1994).
 
7 I have elsewhere argued that the relationship between technology and values is much more fine-grained: technology challenges existing values, it promotes values, it displays values, and it hides values (Hofmann 2006).
 
8 Reproductive cloning appears to be more morally challenging than therapeutic cloning, although the distinction may itself be seen as a rhetorical device to promote particular technologies (and values).
 
9 Although blue eyes and intelligence have been used rhetorically as examples, sex selection and the negative selection of milder diseases or impairments appear to be much more relevant.
 
10 It could be argued that one could use bacterial weapons to kill an alien or to spread vaccines. I owe this counterargument to one of the anonymous referees. However, if a particular technology were used to spread vaccines, it would not be a bacterial weapon; it would be a vaccine-spreading device.
 
11 Note that the term "function" is used differently here than in biology and the social sciences. The technological function is intentional, whereas "function" in biology and social science tends to be non-intentional.
 
12 I.e. in what aspects technology is value-laden.
 
13 It is of course important to differentiate between different kinds of responsibility, e.g. on the personal level, on the group level (role responsibility), and on the societal level. It is beyond the scope of this paper to perform the analysis on all these levels respectively. Allow me only to indicate that I believe that it is possible to do so.
 
14 Some would make an even stronger claim, that once it has been invented, we cannot do anything but accept the technology.
 
15 Or more specifically: its safe implementation.
 
16 Instrumental values are extrinsic values, which, together with other kinds of extrinsic value, are opposed to intrinsic value.
 
17 One could argue that technology has inherent value because experiencing technology has intrinsic value. For some people, appreciating technology may be intrinsically good. This does not make technology an instrumental good (because the good experience is directly related to the technology and not an instrumental result of it).