Metrics, biases, and how we learn better.
V. Ramani's editorial in Interface offered an insightful exploration of how the value of publications is rated by the impact factor (IF). I am not a fan of this metric, since many articles can be viewed only by those with subscription access. Moreover, Julien Mayor's study makes clear that this paradigm measure is tilted by its use of a poor central tendency (the mean applied to a heavily skewed distribution) and by its reliance on only recent publications (and different journals survey different timeframes). Academic outcomes such as funding, hiring, and tenure can be influenced by such measures.
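The central-tendency objection can be made concrete with a small sketch. The citation counts below are hypothetical, chosen only to mimic the heavy right skew typical of a journal's articles, where a few highly cited papers dominate:

```python
# Why the mean misleads for skewed citation distributions:
# one outlier drags it far above what a typical article receives.
from statistics import mean, median

# Hypothetical counts: most articles gather few citations, one gathers many.
citations = [0, 1, 1, 2, 2, 3, 3, 4, 5, 120]

print(mean(citations))    # 14.1 -- pulled up by the single outlier
print(median(citations))  # 2.5  -- what a "typical" article sees
```

An impact factor built on the mean thus reports a figure several times larger than the citation count of the journal's median article.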
When authors publish, they might ask with whom they wish to share their newfound results and discussion, and how to make the work accessible to them.
A recent measure presented in Interface is Altmetrics, which also considers data and knowledge bases, article views and downloads, and mentions in other media.
NIH reviewed its funding decisions for grant proposals and reported that it supported 18.8% of R01 proposals. Striving for objectivity, it used an algorithm developed by E. Day that identified a small but significant bias. The results indicate that nonpreferred applicants need to submit higher-quality proposals to get funded. Fingers are not pointed at specific subsets; however, when such a small deviation can lead to significant outcomes, it will be interesting to see where NIH finds ways to improve this process in budget-cutting times.
Controversies in teaching and learning strategies are
not new. Still, I enjoyed trying Brown, Roediger, and McDaniel's
"Make It Stick: The Science of Successful Learning"
(Belknap, Cambridge, 2014), which emphasizes that active
engagement leads to deeper learning:
- active use in the learning phase: simulations, and problem
solving before specific training in how to solve
- spaced learning, requiring retrieval and relearning
- reflection on classes and practical exercises
- interrupting the forgetting process
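The last two points, spaced retrieval and interrupting forgetting, can be sketched as a toy review schedule. The doubling rule below is my own assumption for illustration, not the book's prescription: intervals grow after each successful recall and reset after a lapse, so review lands just as forgetting sets in.

```python
# Toy spaced-retrieval schedule: each successful recall doubles the
# interval before the next review; a lapse resets it to one day.
def next_interval(days: int, recalled: bool) -> int:
    """Return the number of days until the next review."""
    return max(1, days * 2) if recalled else 1

interval = 1
schedule = []
for recalled in [True, True, True, False, True]:
    interval = next_interval(interval, recalled)
    schedule.append(interval)

print(schedule)  # [2, 4, 8, 1, 2]
```

The widening gaps force effortful retrieval each time, which is exactly the "desirable difficulty" the book argues makes learning stick.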