Thursday, December 17, 2015

The Doomsday Invention

Will artificial intelligence bring us utopia or destruction?

BY 

New Yorker, Nov. 23, 2015

Last year, a curious nonfiction book became a Times best-seller: a dense meditation on artificial intelligence by the philosopher Nick Bostrom, who holds an appointment at Oxford. Titled “Superintelligence: Paths, Dangers, Strategies,” it argues that true artificial intelligence, if it is realized, might pose a danger that exceeds every previous threat from technology—even nuclear weapons—and that if its development is not managed carefully humanity risks engineering its own extinction. Central to this concern is the prospect of an “intelligence explosion,” a speculative event in which an A.I. gains the ability to improve itself, and in short order exceeds the intellectual potential of the human brain by many orders of magnitude.
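The shape of that worry is easy to sketch numerically. The toy loop below is my own illustration, not Bostrom's model, and every number in it is arbitrary; it only shows why recursive self-improvement is feared to be explosive rather than gradual: if each gain in capability also shortens the time needed to find the next gain, capability curves upward ever faster.

    # Toy illustration of the recursive self-improvement intuition; all numbers are arbitrary.
    capability = 1.0    # arbitrary units; 1.0 = roughly human-level
    years = 0.0
    cycles = 0
    while capability < 1e6 and cycles < 100:
        years += 1.0 / capability   # smarter systems find the next improvement faster
        capability *= 1.5           # each cycle yields a fixed proportional gain
        cycles += 1
        if cycles % 10 == 0:
            print(f"after {years:5.2f} years: capability x{capability:,.0f}")

On these made-up numbers, the same proportional gains delivered at fixed yearly intervals would take decades to reach the same point; the acceleration, not the size of any one step, is what the "explosion" language refers to.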

Such a system would effectively be a new kind of life, and Bostrom’s fears, in their simplest form, are evolutionary: that humanity will unexpectedly become outmatched by a smarter competitor. He sometimes notes, as a point of comparison, the trajectories of people and gorillas: both primates, but with one species dominating the planet and the other at the edge of annihilation. “Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb,” he concludes. “We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.”

At the age of forty-two, Bostrom has become a philosopher of remarkable influence. “Superintelligence” is only his most visible response to ideas that he encountered two decades ago, when he became a transhumanist, joining a fractious quasi-utopian movement united by the expectation that accelerating advances in technology will result in drastic changes—social, economic, and, most strikingly, biological—which could converge at a moment of epochal transformation known as the Singularity. Bostrom is arguably the leading transhumanist philosopher today, a position achieved by bringing order to ideas that might otherwise never have survived outside the half-crazy Internet ecosystem where they formed. He rarely makes concrete predictions, but, by relying on probability theory, he seeks to tease out insights where insights seem impossible.

Some of Bostrom’s cleverest arguments resemble Swiss Army knives: they are simple, toylike, a pleasure to consider, with colorful exteriors and precisely calibrated mechanics. He once cast a moral case for medically engineered immortality as a fable about a kingdom terrorized by an insatiable dragon. A reformulation of Pascal’s wager became a dialogue between the seventeenth-century philosopher and a mugger from another dimension.

“Superintelligence” is not intended as a treatise of deep originality; Bostrom’s contribution is to impose the rigors of analytic philosophy on a messy corpus of ideas that emerged at the margins of academic thought. Perhaps because the field of A.I. has recently made striking advances—with everyday technology seeming, more and more, to exhibit something like intelligent reasoning—the book has struck a nerve. Bostrom’s supporters compare it to “Silent Spring.” In moral philosophy, Peter Singer and Derek Parfit have received it as a work of importance, and distinguished physicists such as Stephen Hawking have echoed its warning. Within the high caste of Silicon Valley, Bostrom has acquired the status of a sage. Elon Musk, the C.E.O. of Tesla, promoted the book on Twitter, noting, “We need to be super careful with AI. Potentially more dangerous than nukes.” Bill Gates recommended it, too. Suggesting that an A.I. could threaten humanity, he said, during a talk in China, “When people say it’s not a problem, then I really start to get to a point of disagreement. How can they not see what a huge challenge this is?”

(Continues here...)

==
Readers reply in Dec. 21/28 issue:


Bostrom video




Saturday, October 24, 2015

Tuesday, July 21, 2015

"Memory Enhancement: A Perspective within Bioethics"

Posted for John Holloway

“There is no such thing as philosophy-free science, just science that has been conducted without any consideration of its underlying philosophical assumptions.” – Daniel C. Dennett, Intuition Pumps and Other Tools for Thinking

Modern-day memory enhancement via nootropics, or brain-enhancing drugs, has been swelling with a potent subtlety since piracetam appeared in the 1960s. While this movement of sorts has quickly washed over the modern college student, among other groups, the waters grow increasingly muddled as the number of nootropic users rises. I would like to discuss some of the risks and dangers of nootropic use, the question of fairness across differing socioeconomic levels, the neurochemical repercussions that are slowly but surely making their way into the literature, and how I believe these substances could produce a misrepresented level of functioning among seemingly untraceable individuals in populations where such functioning is rewarded and admired.

Nootropic drugs can increase memory encoding and recall, overall mental acuity, brain-derived neurotrophic factor, attention, and even analytical ability (variably and respectively). There are nootropic drugs with expansive mechanisms of action that are widely available without a prescription or any clinically noted cognitive deficit. Primarily, these drugs work by increasing hemispheric communication, raising levels of available acetylcholine (acetylcholinesterase inhibitors, the racetams), antagonizing NMDA receptors (Namenda), or simply boosting attention with substances like amphetamines, so that more astute and fine-cut observations and connections can be established while distractibility is minimized.

Stepping back to availability: is it fair that some people can afford a quality nootropic substance while others may not have the means, or may only be fiscally prepared for a substance from a disreputable brand? What kind of edge do the people who are knowledgeable about, and can afford, these products have in the classroom over those who are oblivious to their existence or lack the means? What about standardized tests? There is a lofty misrepresentation of the thought-action repertoire and overall capabilities of an individual's mental faculties that can occur in someone taking these drugs, and this misrepresentation can make for a slew of ethical concerns. I suppose "misrepresentation" is only appropriate if the drug works when you decide to take it but confers no lasting benefit after discontinuation.

Often overlooked in this area is long-term nootropic use with improper cycling. While certain nootropic families that have been around for decades (the racetams, piracetam in particular) have no reported adverse effects in chronic use, other newer formulas have serious potential adverse effects on one's biochemistry as a whole. For example, Huperzine A, derived from Huperzia serrata, a Chinese club moss, is popularly thrown into nootropic blends and sold over the counter without proper labeling of the risks or instructions for proper cycling. Huperzine A is an acetylcholinesterase inhibitor: it blocks the enzyme that breaks down acetylcholine (which holds a strong role in learning acquisition and memory) after it binds to its postsynaptic receptor. In "healthy" individuals who take Huperzine A, this means acetylcholine keeps being recycled through the synapse and builds up in comparative abundance, eventuating in more cholinergic postsynaptic receptors being constructed so that more acetylcholine can take effect at once.

What eventually happens with daily use (it should be cycled) is that acetylcholine levels become so high that its very production is slowed (downregulation). As the production of your endogenous (naturally produced within the body) acetylcholine slows, you are also building a tolerance to the Huperzine A. What do you do when a medication stops working? Most people will either increase the dose or switch medications, but increasing the dose here simply exacerbates the precise issues I am writing about. The combination of acetylcholine downregulation, tolerance buildup, and then ceasing the medication can result in major cognitive impairments in learning acquisition and memory encoding and retrieval: what little acetylcholine is still being produced now binds to only a fraction of the available postsynaptic receptors, and it is cleared from the synapse rather than recycled through it, making for a large leap away from the body's natural homeostatic state. We now have very little acetylcholine being produced and our memories are shot. The question arises: is your motivation and willpower strong enough to push through these deficits into the waters of intellectual pursuit, pressuring your brain through intensive stimuli and analytical hurdles to produce, or upregulate, more acetylcholine, and to succeed with a likely fickle and tentative neurochemical state?
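For readers who find this downregulation dynamic easier to see in numbers than in prose, here is a minimal, purely illustrative sketch in Python. It is not a pharmacokinetic model of Huperzine A; the parameter values and the simple feedback rule are my own assumptions, chosen only to show the qualitative shape of the argument above: blocked breakdown raises acetylcholine, production adapts downward, and once the drug is stopped the signal sits below baseline until production slowly recovers.

    # Toy illustration only: hypothetical parameters, not real pharmacology.
    def simulate(days_on_drug=60, days_off_drug=60):
        ach = 1.0         # relative acetylcholine level (1.0 = baseline)
        production = 1.0  # relative endogenous production rate
        history = []
        for day in range(days_on_drug + days_off_drug):
            on_drug = day < days_on_drug
            clearance = 0.5 if on_drug else 1.0  # drug blocks breakdown, halving clearance
            ach += production - clearance * ach  # ACh relaxes toward production / clearance
            production += 0.02 * (1.0 - ach)     # slow downregulation when ACh runs above baseline
            production = max(production, 0.1)
            history.append((day, ach, production))
        return history

    for day, ach, prod in simulate()[::20]:
        print(f"day {day:3d}  ACh {ach:5.2f}  production {prod:5.2f}")

On these made-up numbers the printout traces the pattern the paragraph describes: an initial surge in acetylcholine, a drift back toward baseline as production downregulates (tolerance), and a dip below baseline once the drug is discontinued, with only a slow recovery afterward.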

Moving to the future of genetic memory enhancement: in Sandel's The Case Against Perfection, he discusses how researchers produced "smart mice" from mouse embryos containing extra copies of a memory-related gene. The impressive, and terrifying, thing to note from this research was that the mice passed their enhanced cognitive abilities on to their offspring. Sandel says "such use would straddle the distinction between remedy and enhancement." With respect to the concern raised in Sandel's book, I believe two subspecies of human may very well be a logical eventuation of the cognitive enhancement wave, especially given the increasing disparity between socioeconomic levels.

Should we even aspire to such use of our bioengineering capabilities? My theory is that, because we are very social creatures and the peer-influence effect is ever so potent, humans will continue to do what makes them feel good and what they see making their closest friends feel good. Nootropics could become the "drug of choice" for a swelling mass of individuals who take pride in perceived cognitive capability over sedation or transcendence-abating stagnancy. This could be especially so in a world whose population is splitting in the way Sandel spoke of. I believe most people with a healthy prefrontal cortex would find the means to become "enhanced," just as many of the impoverished find the means to get "high" in today's society. Who knows: with so many bold and astute intellectuals bumping heads, the drive to succeed seems likely to skyrocket in such a scenario. Assuming the nootropic game isn't streamlined in the future, with everyone taking the same concoction, many people will be enhanced with respect to specific areas of cognition, but not all in the same way. Certain jobs are catered to particular modes of cognition, like a taxi driver's need for spatial awareness and memory. While the idea that opposites attract is but a myth (we humans generally like people who are similar to us), those similarities are not necessarily about how much we can memorize. We also tend to like people who possess traits we wish we possessed, so the danger of two subspecies of human (by intellect) developing may whisk about in a world where people aspire to be sharp-minded. And the average Joe with baseline working and short-term memory, taking an Icelandic vocabulary test, will do better than an "enhanced Joe" who took a nootropic that works on GABA receptors for test-anxiety relief and enhanced concentration (due to less rumination), because that Joe is likely inducing impairments to his memory via the sedating influx of GABA or GABA agonists.

Will we reach a point where, in order to keep up with the rat race, everyone must ingest some form of brain-boosting compound or else be deemed lowly, seen in an inferior light when it comes to matters of the mind? Even more problematic, isn't everything we do on a day-to-day basis a series of personal choices, so that all of our thoughts and consequent behavior are in essence a "matter of the mind"? Maybe, however, your endogenous levels of acetylcholine really are too low, and you need the boost to function at some level of equivalency with others of your age and level of education. Well, first off, how do you know, beyond pure speculation or the experience of a subjectively withered mental state, that you are deficient in some regard? We don't quite have the tests to accurately measure neurotransmitter levels; while claims are made on this topic and some tests are offered, I believe, in line with the literature, that you cannot accurately measure these levels through blood serum.

How would you know where you stand in comparison to others with regard to mental functioning?
Neuropsychological evaluation, sure, but in all reality that wouldn't be feasible for everyone, or even for many.
So what, then, should be the cause for pause, or the green light, when it comes to supplementing with these "smart pills"? Should it be done at all? To feel better about yourself because you function at a perceived higher level of cognitive faculty than you once did? Than your competition currently does?

We do many things to feel better about ourselves, to patch up invisible defects in our homeostatic state, but nootropics bring us to the point where feeling better meets being psychologically more resilient through a widened spectrum of thought and perception on matters at hand and to come. There is likely to be a perceived lack of innate ability, I should say, and the danger is exactly that. If one feels a lack, sometimes a pill of any sort, or a prayer, is enough to make the lack dissipate into feelings of contentment with one's own self. A placebo can be anything, not just some inert pill.

Let's talk not about the potential for two subspecies of human, but rather about a perceived split self. Instead of filling a perceived lack you may possess (you #1) by examining which areas of your life may be wanting for attention, why not take the drug that will make you smarter and better equipped to handle the situation? So the novel, smarter you (you #2) comes about as the nootropics take effect, or even before, and this creates two semi-fixed points on two different spectrums of self. You held a consistency of self in your non-nootropic state (barring experiments in inebriation) until you switched gears into a realm of perceived higher mental functioning you didn't think you were capable of, and now the two selves dance and glance about one another, fighting for prominence of being. When you are not on the nootropics, an intellectually trying situation may arise where, instead of comparing your perceived cognitive deficits to other individuals' seemingly superior abilities, you may now also compare them to something you #2 could have handled with ease.

The science is far too scarce on the collateral, or undesired, effects these drugs may have in the long run. Further, if it is true that these drugs can induce a profound deviation from the body's homeostatic healing mechanisms even after they have been discontinued, for better or worse, are you, as Hesse would say, "living in accord with (your) true self"? Or, as James would say, living in sync with your "habitual center of personal energy"? If an individual approaches you and tells you of a truthfully very brilliant idea, one he later says arose after taking a nootropic substance, would you attribute a greater degree of the brilliance to this individual or to one who had the same unprovoked idea by means of his 'natural' mind? Within the field of AI, this issue would be known as "credit assignment": to whom, or what, do we attribute the great idea? If an employee receives a promotion for landing four accounts in one week while experimenting with nootropics, should the individual's boss be made aware of the substance use? Can he make the promotion contingent on continuing the nootropic regimen?

Further, if individuals use these substances to get ahead in the job market, they may simply feel obligated to remain on the nootropic they used throughout college, perhaps to impress an interviewer in their field with "enhanced" mental fortitude and veracity, for fear of otherwise turning in subpar work.

The waters start to get muddy when you also consider that, by definition, food in general is a drug. Many foods (blueberries, lion's mane mushrooms, sauerkraut, wheatgrass) have nootropic effects. Anything consumed that benefits the body nutritionally will inevitably aid brain functioning via the very pertinent gut-brain connection, where constant communication is taking place. The gut is considered our second brain, housing close to as many neurons as the spinal cord, so we are impacting far more than our hippocampus or prefrontal cortex (or other designated target areas) when we consume a nootropic substance, or any substance for that matter, which is important to know but rarely said. A distinction I would make, however, is that the literature on nootropics, compared with the literature on whole foods, notes a statistically significant profundity of psychoactive effects with respect to attention and the like that is rarely experienced to a similar degree after eating blueberries. All of this is to note that nutrition and whole foods do not offer the generally instantaneous and flagrant cognitive benefits that nootropics do, so nootropics are surely something we should examine further, no?

It would seem that, as is typical, knowledge of the realm is pertinent to traveling safely within its parameters. Those who know how to use these drugs so that downregulation of crucial neurotransmitters (for example) is avoided, who know where to get the highest quality for the best price, and who know that even with chronic exposure and "proper" cycling of a brain-boosting regimen there is still a chance of incurring serious cognitive deficits because every individual is different: these are the ones who will be better informed, and consequently (arguably) better placed in some respects, with risk variably but significantly more minimized than those who approach these substances expecting intelligence in a pill, on the blind faith that if they weren't safe they wouldn't be sold over the counter or so popularly used (or abused). A certain level of knowledge must be acquired before one can safely swim these waters, just as for riding a motorcycle or knowing how much sugar is too much. If nootropics do become commonplace, considering the multitude that already exist, harm reduction must be the priority.

Friday, June 12, 2015

Inspiring senior citizens

Prompted by our reading last semester of Atul Gawande's Being Mortal, I've started a new collection. Future finds will take up residence in the right sidebar (do a "Ctrl-F" search for "inspiring")...

Thursday, June 4, 2015

The goals of medicine

The 50th anniversary of Medicare next month is an opportunity to consider what the goals of medicine should be in our aging society and how we want to live in relation to medicine’s evolving tools.

There has been a revolution in medicine and in patient expectations since President Lyndon B. Johnson signed Medicare health insurance into law on July 30, 1965, setting off the often contentious debates about cost control, rationing and privatization now dominating the public conversation about health care. To craft Medicare’s best possible future, however, it may be more productive to focus on the kinds of health care older Americans are actually receiving and are claiming to want.

Consider how much patients, doctors and treatments have changed since 1965:

The United States population is growing and aging. In 1965, fewer than 10 percent of Americans were 65 and older; fewer than 1 percent were 85 and older. Today those figures are 13 percent and 2 percent, respectively, and the latter cohort is the country’s fastest-growing age group. By 2030, people 65 and over are projected to represent 20 percent of the total United States population.

Our aging population has become increasingly medically sophisticated, risk aware and demanding about treatments. The paternalistic physician who knew the patient and family and “made the decisions” into the 1960s has been succeeded by a culture of patient autonomy, hospital-centered medicine and the new importance of medical teams.
(continues)

Monday, May 18, 2015

Oliver Sacks

Andrew Solomon:

Medicine is dominated by the quants. We learn about human health from facts, and facts are measurable. A disease is present or not present; a reckonable proportion of people respond to a particular drug; the inability to predict gene-environment interactions reflects only a failure to map facts we will eventually be able to determine; and if the observable phenotype varies for an established genotype, the differences must be caused by calculable issues. In this version of things, the case histories that constituted most of medical literature up to the early 20th century reflect a lack of empirical sophistication. Only if we can’t compute something are we reduced to storytelling, which is inherently subjective and often inaccurate. Science trades in facts, not anecdotes.
 
No one has done more to shift this arithmetical naïveté than Oliver Sacks, whose career as a clinician and writer has been devoted to charting the unfathomable complexity of human lives. “All sorts of generalizations are made possible by dealing with populations,” he writes in his new memoir “On the Move,” “but one needs the concrete, the particular, the personal too.” The emergent field of narrative medicine, in which a patient’s life story is elicited in order that his immediate health crisis may be addressed, in many ways reflects Sacks’ belief that a patient may know more about his condition than those treating him do, and that doctors’ ability to listen can therefore outrank technical erudition. Common standards of physician neutrality are in Sacks’ view cold and unforgiving — a trespass not merely against a patient’s wish for loving care, but also against efficacy. Sacks has insisted for decades that symptoms are often not what they seem, and that while specialization allows the refinement of expertise, it should never replace the generalism that connects the dots, nor thwart the tenderness that good doctoring requires. A reasonable corollary to the Delphic injunction to “know thyself” is to know thy patient, and few physicians have devoted themselves more unstintingly to such inclusive knowledge than Sacks. Patients want coherence, which can be achieved only when the contradictory essentials of experience are assembled into a fluid account. The doctor must not only listen, but also process what he has heard.

Sacks’ interest, however, is not merely in helping his patients construct their stories, but also in recounting them to the rest of us. The ethics of that undertaking have often been questioned...

Related in Opinion - Op-Ed Contributor: Oliver Sacks on Learning He Has Terminal Cancer

Friday, May 15, 2015

Health & gender, mistakes etc.

TED News (@TEDNews)
How Paula Johnson's TED Talk helped create a wider understanding of gender differences in health t.ted.com/57cM6Vy

No Longer Wanting to Die  A therapy technique I had never heard of helped me deal with the depression and anxiety that threatened to end my life.

The New Yorker (@NewYorker)
One of Britain’s foremost neurosurgeons wrote a memoir about the mistakes he has made and the patients he has failed: nyr.kr/1KY2O73

Thursday, May 14, 2015

Edward Jenner

It was on this day in 1796 that the doctor Edward Jenner inoculated an eight-year-old boy with a vaccine for smallpox, the first safe vaccine ever developed.
Jenner was a country doctor and surgeon in the small town of Berkeley, England, where he had lived for most of his life. The only time he’d ever been away from Berkeley was when he studied for a few years at a hospital in London. It was there that he learned the basics of the scientific method, experimentation and careful observation. The job of a country doctor involved a fairly rudimentary treatment of injuries and illness, but Jenner thought he might be able to put the scientific method to some good use.
The most devastating disease in the world at the time was smallpox, a disease that caused boils to break out all over the body. It killed about one in every four adults who caught it, and one in every three children, and it was so contagious that most human beings in populous areas caught it at some point in their lives. During the 18th century alone, it killed about 60 million people.
In the mid-1700s, British doctors had imported a procedure from Asia in which healthy people were deliberately infected with smallpox through the skin, which brought on a milder form of the disease and then immunity. The procedure was called “inoculation,” after the horticultural term. Inoculation wasn’t practical, because inoculated patients could pass the disease on to others while they were showing symptoms, and some inoculated patients developed the more severe form of the disease and died.
Jenner wanted to develop a smallpox inoculation that wouldn’t harm anyone. He worked in a place with a lot of dairy farmers, and there was a rumor that milkmaids almost never caught smallpox. Jenner realized that the milkmaids had all suffered from a disease called cowpox, which they’d caught from the udders of cows. Jenner had a hunch that the infection of cowpox somehow helped the milkmaids develop immunity to smallpox.
Jenner decided to take some of the fluid from a cowpox sore and inject it into a healthy patient. There were no laws governing medical experimentation on human subjects at the time, but Jenner still had some reservations about trying his ideas out on a person. He mulled it over for years, and then finally decided to go ahead. On this day in 1796, he gathered some cowpox material from an infected milkmaid’s hand and injected it into the arm of an eight-year-old boy named James Phipps.
The boy developed a slight headache, and lost his appetite, but that was all. Six weeks later, Jenner inoculated the boy with smallpox, and the boy showed no symptoms. He had developed immunity from the cowpox.
Jenner submitted a paper about his new procedure to the prestigious Royal Society of London, but it was rejected. The president of the Society told Jenner that it was a mistake to risk his reputation by publishing something so controversial.
So Jenner published his ideas at his own expense in a 75-page book, which came out in 1798. The book was a sensation. The novelist Jane Austen noted in one of her letters that she’d been at a dinner party and everyone was talking about the “Jenner pamphlet.” The procedure eventually caught on, and it was called a “vaccine” after the Latin word for cow. It wasn’t perfect at first, because of poor sanitation and dirty needles, but it was the first time anyone had successfully prevented the infection of any contagious disease.
What made it so remarkable was that Jenner accomplished this before the causes of disease were even understood. It would be decades before anyone even knew about the existence of germs.
Writer's Almanac

Oxford Academic (@OUPAcademic)
Edward Jenner: soloist or member of a trio? oxford.ly/1HlUL5z by Anthony R. Rees #medicine

Tuesday, May 12, 2015

Ethics ethics

If you agree with Atul Gawande about "our job in medicine" being the production of well-being, to help people flourish, then bear in mind that there is ethical life after Bioethics - starting in the Fall with PHIL 3160, The Philosophy of Happiness - TTh 4:20 pm, BAS S279. (Sorry, couldn't resist one last commercial.)

"Ethics ethics" (including PHIL of HAP) is all about the quest for well-being, and the good life.

Ethics Ethics

Have a great summer, everybody. Drop me a line, let me know how you're doing. Live long and prosper.

Thursday, May 7, 2015

Standing by

I don't think Nigel's talking about grading here.

Nigel Warburton (@philosophybites)
If things get bad, I'd like a doctor like Freud's standing by with an overdose of morphine. theguardian.com/commentisfree/ #assisteddying

Also of note: 

Company Creates Bioethics Panel on Trial Drugs (celebrity bioethicist Arthur Caplan in the news)


Johnson & Johnson named the bioethicist Arthur L. Caplan to create a panel to decide on patients’ requests for lifesaving medicines before they are approved.
The New Yorker (@NewYorker)
.@Atul_Gawande examines America's epidemic of unnecessary care: nyr.kr/1IH8eEa pic.twitter.com/5QKNzneR4j


"The Far Shore of Aging"  w/Jane Gross, founder of nyt "New Old Age" blog - On Being w/Krista Tippett

The Last Day of Her Life When Sandy Bem found out she had Alzheimer’s, she resolved that before the disease stole her mind, she would kill herself. The question was, when?
==
“I can’t imagine looking back on my life not having given this to her.” Tonight on :


Exclusive: Meet the world’s first baby born with an assist from stem cells

==
When Doctors Help a Patient Die. The patient was terminally ill. He had decided to end his life under his state’s “death with dignity” law, and his doctor prescribed the medication he would use to do it. But his death was unexpectedly delayed because he drank a large soda before taking the medication — an ordinarily lethal dose — and it apparently interfered with the drug’s absorption. I’ve been told that patients who want to die are now warned not to drink carbonated beverages before or after taking the medication.
In another situation, a physician assisting in a death for the first time prescribed less than the recommended dose of the lethal drug. Although the patient died, it might have been otherwise. And an A.L.S. patient who requested the prescription from his physician met one criterion (having a terminal illness) but not the second (prognosis of six months to live). These are just some of the unexpected wrinkles that have come up in the still-new world of physician-assisted death... (continues)

Steroids (Posted for Ramsey Ferguson)


Blog Post (1 of 3)

I would like to use my final blog posts to reopen the discussion on steroid use and abuse. I know the midterm reports were very long-winded, and even then I feel some people may have been tired of one group talking and never got their questions answered. I've read up on the topic a lot (admittedly on the internet, so how reliable is that?), but I have also heard or know of several personal accounts of people using anabolic steroids.

The biggest issue I see with people using steroids, aside from the fact that they are illegal, is the abuse that comes from not cycling off properly. Many people use steroids with little or no unwanted side effects when they run a cycle of reasonable length and then come off and stay clean long enough to give their body a break. These same people could still face complications down the road, but a lot of the big cases you hear about with serious side effects come from years of steroid use with very little or no cycling off. Another serious problem with steroid use today is that the correct dosage and cycle length really are not set in stone.

This makes me pose the question: if people are going to use steroids (legally or not), and steroids can be used without people suffering serious side effects, then should research be done and credible resources be made available to help minimize the risks associated with steroid use? This idea would meet a lot of opposition, because if you published something informing people of what a bad cycle consists of (dose, length, stack, specific steroid) and warned them against it, some may take that as promoting the "safe" use of steroids. I do believe, however, that as steroids gain popularity among ordinary people and not just big bodybuilders, some type of education could be utilized and could possibly help with healthcare costs down the road by lowering the associated health risks. Let me know what you think!

Ramsey Ferguson
Post 2 of 3
This post will focus primarily on the Anabolic Steroid Control Act of 1990, in which Congress declared anabolic steroids a Schedule III controlled substance. This puts steroids in the same class as Vicodin, LSD precursors, and some veterinary tranquilizers. There is a specified difference between charges for personal use and for intent to distribute, but this can be skewed: while many drugs are bought and sold in small amounts, where intent to distribute is more easily determined, that is not the case with steroids. Steroids are generally bought per cycle or per couple of cycles, which means an individual could have a large quantity of steroids intended for one or two cycles of personal use, making it hard to differentiate between personal use and intent to sell.

Many medical professionals from the FDA, the DEA, the National Institute on Drug Abuse, and even the American Medical Association were called to speak at the congressional hearings leading up to the Act. Their evidence and arguments were disregarded when Congress didn't hear what it wanted to hear. These professionals did not agree with classifying anabolic steroids this way, based on medical evidence, statistics, and personal accounts, but the scare over steroids was enough to override the evidence presented. That doesn't make much sense, but time and time again throughout the semester we have looked at examples of how what people don't understand scares them, and oftentimes they are too stubborn to look at the facts before them and see that some claims don't match reality.

The studies done on anabolic steroids seem to point toward the same conclusion: the mental risks are greater than the physical risks when taking steroids. There have been several cases where a person committed suicide after taking steroids, but that makes me wonder whether the underlying depression or causes of suicide were there prior to taking the steroids. Maybe those psychological issues led to them being unhappy and taking steroids because they believed an enhanced physique would bring them happiness? That is purely speculation, but it does seem viable.
Ramsey Ferguson
https://thinksteroids.com/articles/anabolic-steroid-control-act-wrong-prescription/


Post 3 of 3
For my third blog post I want to supplement the first two with some before-and-after pictures and pictures of some side effects. What is interesting, and what some people may not realize, is that many people take steroids without any desire to become some huge Arnold-like bodybuilder. Many people take them to go from small or average to just well toned and bigger than average. If you start off as a big, lean, muscled-up person and then take steroids, you are obviously going to get bigger and maybe leaner, but my point here is that it's not just bodybuilders taking anabolics; there are many average people just like you and me who take them to enhance their appearance and help them achieve that beach body they've always wanted.
In this picture you can see that this guy started out at what I would guess is about average for a guy who hits the gym. After an 8-week cycle, you see him on the right much leaner and with a lot more muscle mass. Now, if you saw the guy on the right walking around campus, you probably wouldn't jump straight to the conclusion that he has been taking anabolic steroids. That physique can be achieved naturally, but instead of an 8-week transformation it may take a year or so.
Here are a few more before and after pictures
You can see that not all of these guys look like Arnold Schwarzenegger after taking steroids; some of them have bodies that could be achieved naturally without steroids, only with steroids it takes about a quarter of the time and effort.
http://beststeroidscycle.com/

Wednesday, May 6, 2015

Transhumanism and Personhood (Devin Atkins)

(Posted for Devin Atkins)

Transhumanism and Personhood

Transhumanism is fundamentally about transforming humanity into something post-human. To discuss a bit of the ethics behind this, we have to clean up some language. The first task is distinguishing between human and person. The former describes Homo sapiens. The latter is what we're more interested in, and it is much more important. We could go into a very, very long post on the definition of personhood, but for now let's keep it simple with some common criteria and not go too far into specifics. Virtually all humans have personhood: we are self-aware, can learn, and have higher cognitive skills. There are, however, some exceptions, such as humans who are in a vegetative state or brain-dead. There are no cognitive actions happening and no self-awareness, just a body mechanically alive, pumping blood. What's less often thought about, however, is the idea of non-human persons. This could be as simple as intelligent alien life: think Vulcans from Star Trek. Sure, they aren't human, but I doubt anyone would argue Spock isn't a person.

It gets trickier when we look at artificial intelligence, where we also hit another word to define more clearly. Artificial literally means made by a person; however, we often think of it in the sense of fakeness (as in artificial flavors). So artificial intelligence is not "fake intelligence" but "designed intelligence" (as opposed to naturally evolved). Going back to how this relates to personhood, we can again look to Star Trek for a great example of an AI with personhood: Data. Most would consider him a person, as he too can learn from mistakes, has higher cognitive skills, and is self-aware.

I use examples like this because we are familiar and comfortable with them; they are easy to accept if you've seen a few episodes of the show. Even if you haven't, there are plenty of similar examples to draw from. But here the ego starts creeping in: what does this have to do with humanity? Well, we don't look down on Spock or Data for not being human. They are people, just in a slightly different way than us. Then why should we look at our hypothetical altered selves any differently? So often the criticisms of genetic engineering, cybernetic implants, and all that jazz come down to a sense of something lost. There's a general thought that by fundamentally altering humanity, we are lesser in some way for it. But this is a double standard: doesn't Spock have different genes from humans? Doesn't Data have a synthetic brain rather than an organic one? If we keep characters like these in mind, it's easy to see that humanity changing into something post-human does not mean it has lost its personhood or some moral fiber. It's simply another step toward improvement.

I wish I could have gone into more detail, really fleshing out the ideas behind genetic engineering, artificial intelligence, and cybernetic implants. These posts, however, have already run well over the usual range, and we discussed a good bit of one of those topics in class, so the point of this one was to take a step back. We worry so often about the smaller details that are an issue right now, but sometimes it helps to look hundreds of years into the future instead of mere decades, and to ask: hypothetically, if we mastered it, would it be right? Or is there something fundamental at the core of transhumanism that is wrong? So for these last three blog posts I focused on parallels between transhumanism and problems of the past, what's happening now, and how it could look in the future. Hopefully you've gained some perspective on current developments in science and can see that limbs and heartbeats and even brains don't make you a person; your mind does.