Before I begin, I would emphasise that this is a gross simplification; I am narrowly discussing euthanasia as it relates to contingencies for extreme self-experimentation.
Securing legal rights is of vital significance within the H+ community – bodily autonomy, morphological freedom and cognitive liberty are recurring themes. Something often overlooked (perhaps due to the messy nature of the rhetoric) is the right to terminate oneself under specific circumstances – debatably the strongest indicator of bodily autonomy.
I would posit that the right to active euthanasia (i.e. assisted suicide) is a right worth securing to ensure transhumanist liberties in the future, and that risk acceptance is the most viable metric for assessing it.
I have an inkling that libertarian transhumanists would be prime supporters of assisted suicide, but for Extropians and democratic transhumanists the issue is not clear-cut. I would implore that consideration be given sooner rather than later. Primarily, there is an outright morphological-freedom argument to be made – but contextually this concerns termination under the individual's own calculation of utility and risk acceptance pertaining to extreme experimentation.
In simpler terms: if, in the future, people choose to engage in extreme transhuman experimentation to the point of self-destruction, then those people should ensure they have the right to terminate themselves should the process or result become overwhelmingly undesirable (to the individual, and barring effects on other parties).
Intuitively this is antithetical to the end goals of H+ schools such as abolitionism and hedonism, but I suspect it is contrary only to the result, not to the process itself.
When engaging with proactionary self-experimentation, risk control is a major concern – however, risk acceptability to the individual is an overlooked component.
Health is not everything to everyone. This may be indicative of why transhumanism splits into so many families. Many transhumanists consider Happy > Healthy. This is not to say happy and healthy are mutually exclusive – but in cases where the trade-off applies, the risk acceptance should be in the hands of the individual, similar to how we balance cosmetic surgery, in-vivo contraceptives and alcohol/cigarettes (albeit the latter is more complex re: addiction, though it pertains to a similar end).
Considering this, risk acceptance within transhumanist experimentation (and within euthanasia) is an overlooked and under-stressed element. Precautionary medicine accounts only for the magnitude and probability of risk, not for a willingness to accept it. This risk acceptance (whilst not exclusively progressive) is our ethical responsibility regarding the conscious evolution of humans, and I would prefer it remain at the individual's discretion. I would posit that this risk acceptability should be extrapolated to basic transhuman rights and technologies – applying to modifications, but ultimately to accountability over the risk of death, even at one's own hands in a worst-case scenario.
On the grounds that transhumanism has an element of conscious evolution, this risk acceptability when facing 'genetic cul-de-sacs' should be explored vigorously as a turning point of importance. Control over one's autonomy – including the right to die – should be recognised as exceedingly important, especially pertaining to future circumstances with unforeseen consequences. Of course, I wouldn't espouse this too facetiously – it's self-evidently complex, and there is a barrage of caveats not included in this post. But currently I don't believe our societal and political infrastructure supports the contingencies needed for realistic, ethical self-experimentation – hence why it is seen as somewhat 'underground'.
The specific unforeseen consequence that sparked this post was the hypothetical testing of a neural lace. Frankly, if I were to engage in preliminary testing (which I am willing to do under certain conditions) and the result were akin to a digital lobotomy, I would want an exit plan. If religious or political beliefs can intercede on an individual's treatment, it seems fitting that philosophical beliefs be extended the same degree of importance. For me, a digital lobotomy is a fate worse than death, but I would be willing to undertake the risk in service of a wider goal (dependent on the probability of failure, of course).
Understandably, the medical system, from a utilitarian perspective, can't be expected simply to give resources to people who willingly hurt themselves (or at least to those intelligent enough to realise what they are doing). But in response, there is a much greater risk associated with the utilitarian approach to medical treatment on these grounds, as a zero-tolerance policy is unrealistic.
If the medical industry can't/won't help, two scarily realistic alternatives arise:
A) An underground industry, potentially leading to outright dangerous scenarios.
B) A private industry, leading to particularly absurd insurance costs to offset risk management.
The latter is of great importance re: a class divide via technology, but I digress.
Upon reaching this point, my transhumanist ethics come into direct question – for each school of transhumanist thought, ethical arguments can be made. The acceptability of options A and B depends on the school of transhumanism one subscribes to; I can see this hypothetical being useful in deciphering where one's intentions lie. For the sake of discussion, widespread medical acceptance of risk accountability in these circumstances is option C.
Intuitively, I consider A and B to be a waste of public and personal utility. However, I believe this may be an entirely too narrow spectrum of thinking.
To the democratic transhumanist, option C is desirable, as the overall process gives the average Joe an opportunity to 'keep up'. I would posit this is currently the most synchronous with the existing medical industry (or at least with how it's presented in Australia).
To the libertarian transhumanist, option A (or B) may be the most ethical path. Setting aside the notion that libertarians to a degree reject government regulation, these options allow for growth outside regulation and legality. Zero tolerance is unrealistic, hence underground culture is almost entirely independent, whilst private industry can support growth and ultimately influence law, as opposed to law influencing growth.
To the Extropian, however, I imagine it could be entirely dependent on the context of all three options. Options A and B provide the environment closest to a 'transhuman arms race' – giving the individual willing to take the biggest risk the biggest advantage. It seems plausible that Extropians would support whatever provides the human race with 'the best tech', regardless of political circumstance. A hardcore Extropian might not even consider mandatory government 'upgrades' a bad thing, if a few years of suffering undeniably improved the human condition for subsequent generations.
I expected transhumanist philosophy to give me grounds for a more decisive argument; it has only served to demonstrate how contextual the argument is. Each transhumanist theory does provide insight into the favourability and probability of potential transhuman rights, but only once enough variables are controlled that the hypothetical becomes practically useless. Only subsequent generations of transhumanist anthropologists will be able to pass judgement on the literal context. We are in the eye of the storm.
To reiterate and conclude: I cannot wholeheartedly agree with any generalisation regarding euthanasia within transhuman rights, but I would stress its contextual importance. Predictably, the development of risk acceptability will be an organic process, and it would be wise to avoid any homogenisation of opinion amongst transhumanist philosophies – as this would result in a homogenisation of evolutionary potential. I believe the most useful tool in avoiding this would be for each individual to establish their own metric for risk acceptance and to attempt to secure their own right to die under certain circumstances – realistically, this is the only way to organically grow acceptance of this choice without impacting others.