Introducing Distribution

Design Principles
The instructional sequence was constructed with several content and pedagogical principles in mind.
- Concepts of distribution as components of data modeling. Concepts of distribution are introduced as components of data modeling. Data modeling (Lehrer & Romberg, 1996) focuses on concepts of statistics and statistical reasoning as tools for the invention and revision of models. Data modeling always begins with a question and culminates in an inference, which ideally leads to generating a new, more refined question. The process of data modeling includes constructing data (by partitioning objects or events into attributes and their measures) and structuring and displaying data in ways that facilitate inference.
- Emergent qualities of collective, embodied activity. Distribution and related statistics are first encountered as emergent qualities of collective, embodied activity. For example, in the first lessons, distribution emerges as a descriptor of the structure of the measurements made by different measurers. Students often expect that the measures were "random" and (in their view) unstructured. Qualities of distribution are related to activities of measurers. For example, an individual might over- or under-estimate a length, but the collective consequence is a symmetry (the distribution is approximately normal).
- Inventing measures and representations. Statistical concepts are developed by inventing measures and representations, and by comparing them in a way that allows students to evaluate their trade-offs (meta-measure and meta-representational competencies). Students invent or appropriate displays that meet some representational purpose and then engage in an analysis of what each representational system makes more visible and what it makes less visible. For example, varying the widths of intervals and then examining the resulting "shape" of the data introduces the density of a distribution (see the sketch following the Big Ideas below).
- Teaching to big ideas. We have generated a smaller set of "big ideas" about the nature of data modeling and of distribution as guides for teaching and assessment (see Progress Map). These big ideas orient instruction across the lesson sequence, and they guide what we consider worth assessing. These include:
Big Ideas
- Question posing. Data modeling addresses questions because data are constructed in response to questions.
- Attributes/Qualities. Some quality of the world is measured. This entails re-describing objects and events as bundles of attributes. For instance, a clump of dirt has qualities of weight, volume, density, relative moisture, texture, and color.
- Measure. Measure quantifies quality. Understanding of qualities is enhanced by developing their measures. Statistics can be viewed as measures of qualities of distribution.
- Data-Structure and Representation. Data are constructed, not given. Measures are organized or structured in a way that supports answering questions about the world. Different structures (e.g., lists, tables) afford ready access to different qualities of the data. Data displays make aspects of structure visible, always at a risk of concealing other attributes. Structures and representations are best considered in light of alternatives.
- Inference. Inference rests on the sampling distribution: what would happen if a process were repeated? What are the implications of this repetition for contrasts between distributions?
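The role that interval width plays in making the density of a distribution visible (mentioned under "Inventing measures and representations" above) can be illustrated with a short sketch. The measurement values, the bin_counts helper, and the chosen widths below are invented for illustration and are not taken from the project's materials; they show the idea in Python.

    from collections import Counter

    # Invented repeated measurements of the same length (illustrative only).
    measurements = [152, 148, 150, 151, 149, 153, 147, 150, 155, 146,
                    150, 152, 149, 151, 148, 150, 154, 149, 151, 150]

    def bin_counts(values, width):
        """Count how many measurements fall in each interval of the given width."""
        lo = min(values)
        counts = Counter((v - lo) // width for v in values)
        return {(lo + i * width, lo + (i + 1) * width): counts[i]
                for i in range(max(counts) + 1)}

    for width in (1, 3, 5):
        print(f"interval width = {width}")
        for (start, end), n in bin_counts(measurements, width).items():
            print(f"  [{start}, {end}): {'*' * n}")

Printed side by side, the narrow intervals show a ragged profile while the wider intervals gather the measurements into a clearer clump; that contrast is what motivates talking about the density of the distribution.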
Overview of the Measurement Strand
(1) Distribution emerges from repeated measures.
(a) Students measure. Measures vary.
(b) Students invent ways of representing variation and what they expect about the true measure.
(c) Students develop indicators of expected value and spread.
(d) Students test their indicators in light of measurement tools/methods that affect variability.
(2) Sampling distribution emerges from N repetitions of repeated measures.
(a) Students consider what might happen if we "did it again."
(b) We do it again, or show another, larger group of measurements. We display these measures and see what seems to be the same and what seems different.
(c) We introduce sampling as a model of a repeated process drawn from this larger group of measurements. We conduct experiments with fixed sample sizes and look across samples to see what happens to our indicators. Do they vary? If so, how? (A sketch of this kind of exploration follows this outline.)
(3) Repeated measures of the heights attained by rockets, again with multiple measurers. These constitute a reference distribution. We ask what might happen if we kept on launching rockets, again and again. We attempt to establish some expectation about variation and height.
(4) Employing (3) as a reference distribution, we ask students about their expectations of using rockets with pointed nose cones. (Students often think these "cut" through the air.) We again conduct repeated measures and trials (the trial is a source of natural variation in this context), lump the measurements together, and compare to the reference distribution, asking about the basis of informal inference (see the second sketch following this outline).
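The progression in (1) and (2), from many measurements of the same length, to indicators of center and spread, to watching those indicators vary across fixed-size samples, can be sketched in a few lines of Python. The assumed "true" length, the simulated measurement error, the choice of median and interquartile range as indicators, and the sample size are all assumptions made only so the example runs; they are not taken from the classroom materials.

    import random
    import statistics

    random.seed(2)

    # Assumed "true" length and simulated measurement error (illustrative only).
    TRUE_LENGTH = 150.0
    all_measurements = [TRUE_LENGTH + random.gauss(0, 2) for _ in range(200)]

    def indicators(values):
        """Indicators of expected value and spread: median and interquartile range."""
        quartiles = statistics.quantiles(values, n=4)
        return statistics.median(values), quartiles[2] - quartiles[0]

    center, spread = indicators(all_measurements)
    print(f"larger group of measurements: center={center:.1f}, spread={spread:.1f}")

    # "Do it again" with fixed-size samples drawn from the larger group and
    # watch how the indicators vary from sample to sample.
    for trial in range(5):
        sample = random.sample(all_measurements, k=20)
        c, s = indicators(sample)
        print(f"sample {trial + 1}: center={c:.1f}, spread={s:.1f}")

Running this a few times makes the point in (2)(c) concrete: the indicators themselves vary from sample to sample, though typically far less than the individual measurements do.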
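Steps (3) and (4) rest on an informal comparison of a new batch of measurements against a reference distribution. The sketch below uses invented rocket heights and a deliberately simple comparison (how many new launches exceed the reference median); it illustrates the kind of reasoning involved, not the project's actual procedure.

    import statistics

    # Invented heights (in meters) for illustration only.
    reference_heights = [62, 65, 58, 66, 63, 60, 67, 64, 61, 59, 63, 65]  # blunt nose cones
    pointed_heights = [64, 68, 66, 63, 70, 67, 65, 69]                    # pointed nose cones

    ref_median = statistics.median(reference_heights)
    above = sum(h > ref_median for h in pointed_heights)

    print(f"reference median height: {ref_median}")
    print(f"{above} of {len(pointed_heights)} pointed-nose launches exceeded it")
    # If the nose cone made no difference, roughly half of the new launches would
    # be expected to fall above the reference median; a much larger share is the
    # kind of pattern that invites an informal inference.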