
Common Student Difficulties in Organic Chemistry

While cleaning out my newly assigned “war room” (the setting where I’ll strategize on how best to torture students this fall), I came across some fairly interesting documents buried in the far corners of crowded file cabinets.  They’re nothing personal or incriminating (sorry, TMZ), but I saw them as material I could use in upcoming classes.

One of the several I found, titled “Common Student Difficulties in Organic Chemistry,” caught my attention more than the others.  The document, which appears to have been assembled using a typewriter (for the unfamiliar, you can find information about typewriters here), lists problems students encounter while navigating through the dreaded “O Chem”.  In any case, at the bottom of the page, in bold, is the following message:

If you start to get into trouble in this course review this sheet.  Knowing what has gone wrong allows you to fix it.

This closing interested me from a historical perspective.  Did enough students bomb the course to warrant this document’s assembly?  Did the professor discover this or a similar list at an ACS meeting and feel it was prudent to include it in his/her course?  Did the document actually help students better understand the course material?

Although I can speculate until the cows come home, I’m throwing it out to you, the blogosphere.  Do you agree with this list?  Would you change anything on it?  I’m curious to see what the blogger generation thinks (FYI, I believe this list was developed in the 1980’s).

  1. Lack of organization
  2. Difficulty in keeping up with lecture while taking notes
  3. Failure to finish exams
  4. Inability to manipulate three-dimensional structures on paper
  5. Too little drill – lack of repetitive practice
  6. Falling behind
  7. Poor problem analysis
  8. Inability to see and mentally manipulate three-dimensional objects
  9. Insufficient energy and/or motivation for the challenges of this course

Your Academic Lineage

Over dinner the other night, my uncle and I started comparing and contrasting our academic experiences.  He’s a fascinating person who earned a bachelor’s degree in computer science in the late 1970’s.

After discussing the finer points of Moore’s Law, and how he agonized over purchasing a 20 MB hard drive for $400 in the 1980’s, the substance of the conversation switched.  “Have you ever researched your Ph.D. lineage?” he asked.

“I’ve gone as far back as Breslow,” I replied, completely forgetting that he probably didn’t know this “Breslow” character.

It turns out that several of his doctoral computer buddies had recently taken on this task, many of them somehow descending (academically) from Charles Babbage.

Our discussion prompted me to further examine my own background.  I soon discovered that several university websites provide chemistry academic lineages for their faculty members.  Being an organic chemist, I was interested to learn that E.J. Corey worked for John Sheehan (I admit it…I’m nerdly).  In any case, a few of the lineage sites I came across are well worth exploring.

The Good, The Bad, and The Ugly

Does anyone else have a difficult time trying to separate “good science” from “bad science”?  I’m a very black-and-white person.  I love facts and truths and logic, and that drives most of my family crazy.  Perhaps that’s why I struggle with identifying bad science; there’s seemingly no clear-cut, concise way of identifying junk that ends up published.  To be clear, I’m not talking about retractions for blatant disregard of scientific ethics.  I’d classify those situations (e.g., the Xenobe controversy, Sames’ retractions, Bell Labs, etc.) as “ugly.”  I’m particularly concerned with cases where, during a presentation, everyone sort of looks around, raises an eyebrow, frowns, and collectively mumbles, “Hmm.”

It seems the term “junk science” has been in use in the legal profession since the 1980’s.  Yet, despite decades of use, “junk science” remains an ambiguous concept.  In 1998, legal experts Edmond and Mercer attempted to conquer this beast by identifying “good science,” then considering outlying cases “bad.”  Here’s what they considered “the good”:

“’Good science’ is usually described as dependent upon qualities such as falsifiable hypotheses, replication, verification, peer-review and publication, general acceptance, consensus, communalism, universalism, organized skepticism, neutrality, experiment/empiricism, objectivity, dispassionate observation, naturalistic explanation, and use of the scientific method.”

Does this list really mean that everything else is considered “junk”?  I can think of a few brilliant studies that used trial and error methods in lieu of the scientific method.  Conversely, I’m aware of peer-reviewers who simply check the “publish” box without actually reading the manuscript.  As is argued on several other blogs, identifying “junk science” is a very gray area.

Perhaps one way to define junk science is to take the Jacobellis v. Ohio approach.  In that 1964 US Supreme Court case involving obscenity, Justice Potter Stewart wrote in his opinion, “I shall not today attempt to define the kinds of material I understand to be [pornography]…but I know it when I see it.”  The same frame of thought could certainly be applied to junk science, but I’m less inclined to accept the Jacobellis approach because it offers nothing tangible.

There must be some empirical qualities that set the good apart from the bad.  Despite all the skills I’ve picked up in a mere decade of lab experience, I’m disheartened to admit that I honestly never perfected the skill of detecting bad science.  So, like a responsible, up-and-coming assistant professor of chemistry, I went crawling through the literature to determine what separates the good from the bad.  Below is a list of a few things I learned.

In the spirit of Jeff Foxworthy, science might be “junk” if…

Researchers are more concerned with holding press conferences than with publishing results in reputable, peer-reviewed journals. One might assume that “breakthroughs” ought to be showcased in the most prestigious journals after being subjected to a rigorous peer-review process.  Fast-tracking all the way to the press-conference phase certainly raises some red flags about credibility.  I’ve seen this phenomenon first-hand, and when the science is questionable, the ensuing public announcement can get really ugly (and entertaining, for that matter).

Something about the research seems off-kilter. If you think something doesn’t feel right, you might be correct.  Although going with your gut will only get you so far, analysis guides such as “Tipsheet: For Reporting on Drugs, Devices and Medical Technologies” help identify specific areas for journalists to consider when examining the veracity of medical therapies.  Cook and co-workers have suggested that similar checklists might likewise serve the general scientific community when evaluating the credibility of reported work.

Conflicts of interest are not explicitly disclosed. In these cases, scientific integrity might be compromised for financial, political, or other external motivations.  In developing this article, I encountered journals, funding agencies, and governing bodies that require authors to declare any potential conflicts of interest when publishing or applying for grants.  Although editors and referees try to uphold strict transparency policies, authors can still fail to report external influences and biases.  These cases touch essentially every facet of research: cancer studies, pesticide testing (Berkeley Sci. J. 2009, 13, 32–34), and even drug development.  The onus is on the audience to look into the authors’ sources of funding.

The flow of logic doesn’t make any sense. Junk science may have gaping holes in its experimental descriptions or proposed models.  Fortunately, overly simplistic and inaccurate scientific explanations usually evoke sharp criticism from scientific experts.  Credible “debunkers” often attack the logic of an issue by, for example, discrediting cited authoritative opinions, identifying assumptions, and/or offering overlooked hypotheses.

Colleagues in the field are widely skeptical of the work. Mix it up with your cohorts.  A simple “Hey, what did you think about the most recent (insert name of researcher here) article in JOC?” can shed some light on the context of published or presented findings.  “[He] hasn’t published anything reproducible in the past 20 years,” my PI once said.  “I sincerely doubt that this latest paper is anything new.”

Science as Art

Princeton University’s Art of Science contest has produced a gallery of pretty spectacular images of science in action.

This is the fourth Art of Science competition hosted by Princeton University. The 2010 competition drew more than 115 submissions from 20 departments. The exhibit includes work by undergraduates, faculty, research staff, graduate students, and alumni.

The 45 works chosen for the 2010 Art of Science exhibition represent this year’s theme of “energy,” which we interpret in the broadest sense. These extraordinary images are not art for art’s sake. Rather, they were produced during the course of scientific research. Entries were chosen for their aesthetic excellence as well as their scientific or technical interest.

Interestingly, the first, second, and third prizes were determined according to the golden ratio, with first prize earning $250, second prize earning $154.51, and third prize earning $95.49.
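
For the curious, here’s a minimal Python sketch of that arithmetic (the $250 starting figure comes from the contest; everything else is just the golden ratio): each prize is simply the previous one divided by φ ≈ 1.618.

    # Minimal sketch: successive prizes scaled down by the golden ratio (phi).
    PHI = (1 + 5 ** 0.5) / 2   # phi ≈ 1.6180339887

    first = 250.00
    second = first / PHI       # ≈ 154.51
    third = second / PHI       # ≈ 95.49 (i.e., first / PHI**2)

    print(f"First:  ${first:.2f}")
    print(f"Second: ${second:.2f}")
    print(f"Third:  ${third:.2f}")

Running it reproduces the $250 / $154.51 / $95.49 split quoted above.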

Be sure to check out all the images; many of them are quite striking.  Clicking on an image brings up a caption explaining what you’re looking at.