Cannabinoids for Cannabis Use Disorder?


The research is in. Scientists have discovered a cure for cannabis addiction, and it turns out to be cannabis!

That was the gist of a headline-generating paper published in JAMA Internal Medicine, a journal of the American Medical Association, which wasn’t trying to be satirical.

The July 2019 report, titled “Nabiximols for the Treatment of Cannabis Dependence: A Randomized Clinical Trial,” described an Australian study that probed the use of a standardized cannabis extract for treating cannabis dependence.1 The extract, called nabiximols (and marketed under the brand name Sativex), is an ethanol-based sublingual spray containing roughly equal parts THC and CBD, which has been approved for treating multiple sclerosis in many countries around the world.


The researchers hoped to show that this particular pharmaceutical formulation of cannabis would stop participants from smoking marijuana. Of course, the research would also need to show that nabiximols actually improves the quality of life of people trying to wean themselves off weed, rather than just changing their source of THC.

Replacement Therapies

The practice of treating drug addiction with other drugs is nothing new. Nicotine replacement therapy (NRT), delivered via a patch, gum, or lozenge, can help ease cravings for tobacco without all the toxic smoke. NRT isn’t perfect, but it can be a useful harm reduction technique.

Not all formulations are equal, though, even those composed of the same drug. E-cigarettes are not considered a valid treatment for tobacco smoking in the United States, thanks to a handful of studies suggesting that they are considerably less effective than other NRTs and can act as an entry point for teens to start using tobacco.

Opioid replacement therapy is another common practice in clinics. Methadone and buprenorphine are both highly addictive, but getting a heroin addict onto a legal supply of consistent, unadulterated opioids can save lives.2 Replacement therapies are a double-edged sword, however. After all, heroin was considered a treatment for morphine addiction in the late 1800s.

Eager to find a replacement therapy to facilitate cannabis cessation, the National Institute on Drug Abuse and other institutions have been funding research into medication-assisted treatment for marijuana dependence. Thus far, no meds have been approved for this purpose, but not for lack of trying.3

A Red Flag

At first glance, the 2019 JAMA article may seem like a regular study seeking to test a potential substitute medication for marijuana. As a randomized clinical trial, the Australian study was pre-registered, with the hypothesis and methods laid out before data collection began.4

In their registration, the Australian researchers listed three primary aims and a handful of secondary aims for the clinical trial, along with the statistical methods that would be used to analyze the data. The primary goals were to compare nabiximols to placebo in three ways:5

  1. Would nabiximols improve abstinence from cannabis use, or reduce the frequency of use, compared to placebo?
  2. Would nabiximols affect treatment retention?
  3. What are the side effects of nabiximols, compared to placebo?

They described the first question as follows: “Unsanctioned cannabis use will be quantified as 4-weekly point prevalence abstinence during the 12 week maintenance phase by combining self-report data from researcher interviews with objective measures of unsanctioned cannabis use (weekly UDS [urine drug screen] with quantitative analysis of urinary THC, CBD and their metabolites). Unsanctioned cannabis use will also be reported as mean days used, and percentage of positive urine drug screens.”
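To make that registered metric concrete, here is a toy sketch in Python of how “4-weekly point prevalence abstinence” could be tallied from weekly self-report data. The participant IDs, the numbers, and the one-week abstinence window are hypothetical illustrations, not data or code from the trial.

```python
# Toy illustration (not the trial's analysis code): a participant counts
# as abstinent at a checkpoint if they report zero days of unsanctioned
# use in the week preceding it. All names and values are hypothetical.

weekly_use = {
    # participant_id: self-reported days of use in each of 12 weeks
    "p01": [5, 3, 2, 0, 0, 1, 0, 0, 0, 0, 0, 0],
    "p02": [7, 7, 6, 5, 5, 4, 4, 3, 3, 2, 2, 1],
}

for week in (4, 8, 12):  # the 4-weekly assessment points
    abstinent = [pid for pid, days in weekly_use.items() if days[week - 1] == 0]
    print(f"Week {week}: {len(abstinent)}/{len(weekly_use)} abstinent: {abstinent}")
```

The protocol promised to cross-check such self-reports against weekly urine screens, and that is precisely where the logic breaks down.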

This registered aim should have immediately raised a red flag, given that nabiximols (Sativex) is a formulation of cannabis. The THC and CBD in nabiximols are not chemically different from the THC and CBD in “unsanctioned cannabis.” So all clinical trial participants who received nabiximols would test positive for marijuana, according to the usual “objective measures.” Practically speaking, this means that assessing whether the first goal had been achieved would be limited to self-reporting, rather than urine analysis.

The registration for the study went up in early 2016, followed by over a year of patient recruitment. Slightly less than half of all recruited patients completed the full 12-week abstinence program. Afterwards, the researchers analyzed the data and wrote up the results, which were accepted by JAMA in April 2019.

But, come publication, the study referred to only a single primary outcome, rather than three. What happened to the two other outcomes that were promised when the study was registered?

Moving the Research Goalposts

This discrepancy between the three registered goals and the single published outcome was first pointed out by Stanford doctors Robert Kleinman and Michael Ostacher in a letter to JAMA Internal Medicine.6

In “Nabiximols for the Treatment of Cannabis Dependence,” Lintzeris et al. repeatedly describe the primary objective of their research: “The primary hypothesis for the study is that a 12-week treatment program with nabiximols will result in significantly less illicit cannabis use…”

According to the Australian authors, “The primary end point was self-reported total days of illicit cannabis use during weeks 1 to 12…”


They even go so far as to disavow their registered plan to use abstinence as a primary outcome: “Previous studies of treatment for cannabis dependence have reported on abstinence rates, and this outcome (while not the primary end point in this study) was used to estimate a sample size of…” [emphasis added].

What was lost between the study’s design and the data analysis? Which primary goals were dropped or reframed as less important, and why?

For one, the discussion of adverse events was minimized. There were no statistical differences between the adverse event profiles of nabiximols and the placebo, though the reader would need to download the supplementary data tables to see the specifics; only a handful of them are jammed into a single paragraph describing the treatment’s side effects. Nabiximols didn’t appear to cause more side effects than a placebo spray, but there were not enough participants to analyze specific problems.


One interpretation of this result is that nabiximols treatment, on average, neither produced nor reduced cannabinoid-related side effects.
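A quick back-of-the-envelope calculation shows why a trial of this size can say so little about specific adverse events. The group size and event rates in the sketch below are hypothetical, chosen only to illustrate the arithmetic, not figures from the trial.

```python
# Why small trials say little about specific side effects: with n patients
# per arm, the chance of observing at least one event with true rate r is
# 1 - (1 - r)**n. The group size and rates here are hypothetical.
n = 60  # patients per arm (illustrative)
for rate in (0.01, 0.05, 0.10):
    p_seen = 1 - (1 - rate) ** n
    print(f"true rate {rate:.0%}: chance of >= 1 observed case = {p_seen:.0%}")
# A 1% side effect appears in only ~45% of such trials; even when a few
# cases do surface, the arms are too small for a meaningful comparison.
```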

The other primary question that got swept under the rug pertained to treatment retention. Poor retention often foreshadows a failed pharmaceutical. If the study participants are giving up on the treatment, then it’s unlikely to succeed as a medicine. In both the placebo and Sativex groups, slightly more than half of the participants dropped out. This dropout rate is fairly typical for studies on cannabis dependence. Only three sentences in the JAMA article address what was supposed to be one of the key outcomes of the study; the authors mention in passing that the treatment didn’t significantly affect retention.

In other words, what turned out to be two non-significant outcomes and one positive self-reported outcome was presented in JAMA as a single positive result. And even this lone positive result appeared doctored from its original intent, with abstinence no longer recognized as the aim of their treatment for cannabis addiction. It’s hard not to suspect that the authors were spinning their report to inflate the significance of the paltry results.

Hedging their Bet

It’s worth asking why this matters in the first place. It may seem a bit dodgy to change the hypothesis of an experiment after the fact, but that doesn’t necessarily mean the data isn’t valid. Nor does it mean that their conclusions can’t be right. Isn’t science supposed to be objective, regardless of a researcher’s intention?

Unfortunately, the problem is a lot bigger than just data. Fiddling with a hypothesis after the research has been conducted fundamentally compromises the scientific process itself. The scientific method uses experiments to test ideas and better understand the patterns that emerge in nature. But that’s not the same as just pointing out what patterns happen to exist in a given data set. Hindsight is 20/20, and it’s easy to find trends after the fact.

Post-hoc hypothesizing is so tempting and so common among researchers that it’s gotten a name for itself: “HARKing,” or Hypothesizing After the Results are Known.7 This retrospective thinking lets people reframe their hypothesis to appear correct, rather than actually testing a model scientifically.
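To see how much room this leaves for self-deception, consider a minimal simulation, sketched in Python below. It assumes a drug with no effect at all, three outcome measures, and a researcher who reports whichever outcome happens to clear p < 0.05; the group sizes and outcome count are invented for illustration, not taken from the nabiximols trial.

```python
# HARKing in miniature: simulate many replications of a trial where the
# drug truly does nothing, measured on three outcomes. Reporting whichever
# outcome happens to be "significant" yields far more positive findings
# than the nominal 5% false positive rate. All parameters are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
n_trials, n_patients, n_outcomes = 10_000, 60, 3

harked_positives = 0
for _ in range(n_trials):
    for _ in range(n_outcomes):
        drug = rng.normal(0, 1, n_patients)     # no true effect
        placebo = rng.normal(0, 1, n_patients)  # identical distribution
        _, p = stats.ttest_ind(drug, placebo)
        if p < 0.05:            # cherry-pick the first "hit" and stop
            harked_positives += 1
            break

print(f"'Significant' result reported in {harked_positives / n_trials:.1%} of trials")
# Expect roughly 1 - 0.95**3, about 14%: nearly triple the advertised 5%.
```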

Oftentimes, HARKing allows researchers to disproportionately emphasize parts of the experiment that look “good” by some standard. That tends to mean flashy, positive results that draw more excitement, funding, and attention to their research. Positive papers are much more appealing than ones finding no association or replicating a result that had already been discovered. It’s one of the many features of publication bias, which tends to promote positive results while suppressing unwanted or nonsignificant findings.


Some 30-50% of scientists admit to engaging in these post-hoc hypothesis adjustments, according to surveys.8

A Lame Reply

Kleinman and Ostacher’s letter goes on to mention other questionable statistical maneuvers in the JAMA study on cannabis addiction, such as the apparent failure to correct for multiple comparisons in the Australian clinical trial. Often called p-hacking,9 testing extra associations in search of any positive result greatly inflates the likelihood of false positives unless the analysis corrects for the number of comparisons made.
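The arithmetic behind that inflation is easy to check directly. A short Python sketch follows; the test counts are illustrative, not drawn from the Lintzeris trial.

```python
# Uncorrected multiple comparisons: with m independent tests at alpha = 0.05,
# the chance of at least one false positive is 1 - (1 - alpha)**m. A
# Bonferroni correction (alpha / m) restores the intended error rate.
alpha = 0.05
for m in (1, 3, 10, 20):
    fwer = 1 - (1 - alpha) ** m     # family-wise error rate
    print(f"{m:>2} tests: P(>=1 false positive) = {fwer:.1%}, "
          f"Bonferroni cutoff = {alpha / m:.4f}")
# 1 test: 5.0%; 3 tests: 14.3%; 10 tests: 40.1%; 20 tests: 64.2%
```

Corrections like Bonferroni exist precisely to claw the error rate back down; skipping them while running many tests is what turns exploration into p-hacking.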



JAMA Internal Medicine published Kleinman’s critique alongside another letter about the study, as well as a response from Lintzeris and two other co-authors.10,11 Editors typically give authors the chance to respond to criticism of their article. But Lintzeris’s response does not even acknowledge the removal of two of the study’s three primary end points, which undermines the authors’ credibility.

Instead, while neglecting these serious concerns, the reply focuses on doubts about the single primary outcome that the authors had presented.

In addition to the other criticisms raised, the Stanford doctors took issue with how the first primary outcome was initially described.

The final publication focused on how treatment affected the frequency of cannabis use, not on the likelihood of abstinence from cannabis. That’s a key difference.

According to the registered protocol, both metrics should have been incorporated, but the publication only described the successful outcome (an average decrease in the number of days cannabis was used) as a primary result. Abstinence was demoted to a secondary measure. And the fact that nabiximols treatment didn’t increase abstinence from cannabis was buried later in the article.

In their rebuttal, Lintzeris and colleagues defended reframing their single primary outcome, while still ignoring the fact that they had pre-registered three primary outcomes, not one. They suggested that the emphasis was unimportant because the results were presented elsewhere in the paper, saying, “Both self-reported days used and the proportion of patients self-reporting abstinence at 4-week research interviews are transparently reported in the article, enabling readers to make their own conclusions. We make no claims that nabiximols is effective in achieving abstinence at a greater rate than placebo.”

The second half of the letter goes on to defend the measurement of reduced use, rather than full abstinence, in response to another letter, leaving many issues unaddressed.

Fudging the Facts

Randomized clinical trials are held up as an ideal form of scientific evidence. But not every report is equally credible. An honest case study is more valuable than a misrepresented clinical trial.

Many experiments bearing negative or disappointing results never see the light of day, which may be why nearly a third of US-based clinical trials don’t get reported.

Positive results aren’t always what they appear to be, either. In another 2019 study, for example, scientists at Harvard and Yale described their efforts to treat cannabis addiction with galantamine, a drug for Alzheimer’s dementia. After finding the treatment had no effect on cannabis addiction, they somehow asserted that their data “support the feasibility of the administration of galantamine for individuals with CUD [cannabis use disorder].”12

In the nabiximols study, it appears that the Australian researchers conducted a trial with mediocre results. But instead of presenting the experiment as it was designed, the researchers chose to misleadingly interpret their data to make the results look better.

Even the gold standard of medical research can be tarnished in the wrong hands.



Adrian Devitt-Lee, Project CBD’s chief science writer, is pursuing a PhD in math at University College London.



Copyright, Project CBD. May not be reprinted without permission.


Footnotes

 
