Psychological Rationale and Considerations: mySelfQuantifier


Introduction

This document describes psychological considerations for mySelfQuantifier, i.e., facts about the human mind that the design of mySelfQuantifier must take into account.

Cognitive Productivity Systems Need to Include IT Guidance, and be Informed by Broad Cognitive Science

Cognitive productivity systems need to include IT guidance, and they ought to be informed by broad cognitive science. “Broad” cognitive science includes not only cognitive psychology, but the psychology of emotions and moods. It also includes other disciplines, such as AI, linguistics, philosophy and neuroscience. Organizational psychology and behavioral economics are also relevant.

Why do I say this? Because ultimately, productivity systems are about how to use the human mind. Cognitive science is the science of mind. If you look at books like Getting Things Done, for instance, you will find many claims about the human mind. But are they valid? Are they based on science?

Most productivity systems are not based on a broad, detailed analysis of relevant cognitive science.

For example, the first edition of the highly influential book Getting Things Done does not reference scientific literature or include a bibliography. The main examples on which it is based (e.g., cleaning one’s garage, sorting paper) do not involve expansive knowledge work, let alone the use of advanced information technology. It is not a cognitive productivity system per se, yet many knowledge workers swear by it. (Incidentally, knowledge workers ought not to swear by any productivity system; cultishness belongs in the bin. Search for Getting Things Done in Cognitive Productivity for more details.)

Deep Work, written by an academic, Cal Newport, does cite psychological research, but it’s fairly lightweight stuff (e.g., about the concept of “flow”, which is criticized in Cognitive Productivity). Deep Work is also quite negative about information technology. By this I mean that in Deep Work, Newport writes mostly about what not to do with information technology.

Daniel Levitin is a rare psychologist: one who has delved into productivity and written a popular book about it, The Organized Mind. The book, however, has very little by way of practical recommendations for the core of knowledge work. Like Deep Work, it recommends paper and pencil (…). (I’m not saying that paper and pencil are useless, but the ratio of paper to electronic recommendations should be very low today.) To my astonishment, when it came to giving practical advice, Levitin (again: a psychologist) relied on Getting Things Done, a book that was written without reference to empirical science, and which has been criticized by some as being cultish. Like Getting Things Done and Deep Work, The Organized Mind says very little about using information technology. (Its core examples concern cleaning one’s garage and filing paper mail.) Knowledge workers have published ways of applying such systems with information technology, and there are apps like OmniFocus for this purpose. Incidentally, I myself use OmniFocus. But I don’t stick to David Allen’s recommendations, nor is there, to my knowledge, empirical support for the idea that I should.

This is not a criticism of these authors. Authors can only cover so much in one book. And authors must write from the vantage point of their expertise and for a particular audience. It is instead a criticism of the overall literature.

Nor am I claiming that being informed by cognitive science ensures helpfulness. Not only can scientifically grounded treatments miss the mark (the annals of applied science are a testament to this); there is also not enough pertinent data, and theories are lacking. Still, science has made pertinent progress.

Here at CogZest, our focus is on cognitive productivity which, in modern times, requires using information sources and technology in a manner that accords with our understanding of the human mind. I made this argument at length in Cognitive Productivity. That is why I have included this page on psychological rationale for self-quantification, and a page on self-quantification requirements. For more extensive cognitive productivity requirements, and a review of the pertinent cognitive science literature, see Cognitive Productivity.

This document lists some psychological principles relevant to self-quantification.

The Importance of Reviewing Performance

In order to improve performance, prior performance needs to be reviewed. Review is recommended in several productivity systems. However, these suggestions typically fail to account for psychological facts, such as forgetting, and fail to provide technical guidance on review. (In contrast, you can be sure that professional coaches in major American leagues, and officers in the US Armed Forces, leverage technology and psychology in their reviews.) Meaningful performance review requires that

  • performance be directed towards a goal,
  • performance be observed, recorded, and quantified or at least classified,
  • outcomes of performance be assessed / evaluated,
  • feedback be generated by a machine or human,
  • the feedback be understood, valued and assessed by the performer, who must attempt to modify his/her behavior accordingly, assuming the feedback is deemed helpful.

It is one of the major functions of human consciousness to engage in performance review. This is well argued in one of the most important books in cognitive neuroscience, A Mind So Rare: The Evolution of Human Consciousness. In Cognitive Productivity, I described how performance review can work “under the hood”, in the human mind-brain.

There’s plenty of literature on expertise, and the roles of review and feedback are often discussed. Mark Guadagnoli’s entertaining, free book on performance improvement, Practice to Learn, Play to Win, is germane. His book deals with sports rather than knowledge work, but the same psychological principles are at work.

Self-quantification software needs to support such review. mySelfQuantifier, for instance, enables you to concisely track an activity, its goal, your results, and the rationale for the goal and/or activity. One would only do this for select activities, for instance the ones from which one wants to learn. But one needs to be able to do so, and to do it quickly, so that at least some of one’s actions can be tracked. This also allows one to make a note of critical failures, so that they can be captured during review. Otherwise, as you will see in the next sections, reviews will likely be faulty.

Incidentally, Stoicism (an ancient philosophy) is on the rise these days, possibly as a result of the rise of Eastern forms of Buddhism that are similar to Stoicism. Stoics advocate an evening meditation which is a review. Cognitive science tells us that, without systematic tracking, key daytime successes and errors of omission and commission will be omitted from daily reviews. The next section explains why.

Human Self-Reports are Extremely Error Prone

One of the most important findings in cognitive psychology is that recalling and assessing one’s performance is extremely difficult. Unaided, we all suck at it.

This is partly because we have limited conscious access to our own mental processes. It is also because content from working memory decays very rapidly. Further, we store only an extremely small amount of meta-information (information about our own mental processes) in our long-term memories. And whatever information we store doesn’t tend to be useful for explicit analysis and deliberation.

This does not mean that we ought not to engage in self-monitoring. Nor does it mean that relying on oneself for information about oneself is pointless. It means that we need to use self-quantifying tools that augment our cognition. If we are to rely on our own reports, the reports need to be generated very soon, and in the right way. Without proper, timely logging, it is very difficult to recall specifically what one has done. Without specific records, it is difficult to detect and repair performance flaws.

Psychologists have devised techniques to get around some of these limitations, such as think aloud protocols. (See Pressley & Afflerbach, 1995).

I contend that if you want to improve your performance, it helps to record your performance soon after its occurrence. Doing this efficiently itself requires practice. (Yes, this is recursive.) mySelfQuantifier supports such recording. It might enable you to maintain a cadence of performing and logging.

Suggested Readings:

  • Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84(3), 231–259.
  • Pressley, M., & Afflerbach, P. (1995). Verbal protocols of reading. London, UK: Routledge.
  • Sloman, A. (1978). The computer revolution in philosophy: Philosophy, science and models of mind. New York, NY: Harvester Press. Retrieved from
    http://www.cs.bham.ac.uk. See in particular, chapter 6.
  • Wilson, T. D. (2004). Strangers to Ourselves. Harvard University Press.

People Might Overestimate How Much They Work

An implication of the research on which the prior section is based is that knowledge workers overestimate how much they work. And people even more drastically overestimate the amount of housework they do. It is not a coincidence that housework is one of the major causes of marital discord.

You have probably heard of people who systematically work extraordinary numbers of hours per week. Here’s a particularly extreme headline: “Yahoo CEO Marissa Mayer explains how she worked 130 hours a week and why it matters” (Aug 4, 2016).

I know next to nothing about Ms. Mayer, so this is not a comment on her. However, when you hear someone say they work such long hours, you’ve got to wonder. First, what does it mean “to work”? Second, how do they know they work so much? Might they be wrong?

It’s not just that studies have shown that people who are sleep deprived zone out without knowing it (compare Coren, 1996). Without an adequate self-quantification system, it is very difficult for knowledge workers who juggle multiple projects to precisely estimate how much they work. Here again, time-tracking apps, as useful as they are, are normally insufficient. Unless you configure such apps very extensively (which is time consuming, and hard to validate), you can easily categorize as work activities that are not work.

A tool like mySelfQuantifier allows users to precisely and accurately indicate whether time spent actually qualifies as work. It also allows one to indicate on what project work was performed, and to indicate other attributes of time.

My proposal that knowledge workers over-estimate the amount of work they do is partly based on Robinson et al. (2011). However, the debate between Frazis and Robinson on this issue is not yet resolved (see the readings below), with Frazis (2014) sharply criticizing the internal consistency of Robinson’s evidence. Nevertheless, I feel that the type of data to which both researchers appeal (diary data from a variety of workers) is inadequate, or at least insufficient. It is important to narrow the scope to knowledge workers (to whom the concepts of cognitive productivity and deep work apply most clearly, and for whom the potential for off-task behavior is high) and to use fine-grained, computer-based logging of actual behavior. Meanwhile, given extensive experimental research in many areas of psychology, it would be naive to assume that knowledge workers can precisely report their productivity.

A tool like mySelfQuantifier is required to accurately resolve these issues. See mySelfQuantifier as a Tool for Scientists, below.

Readings

  • Coren, S. (1996). Sleep Thieves. Simon and Schuster. (Shows that sleep-deprived people are not as productive as they think they are.)
  • Frazis, H. (2014). Is the workweek really overestimated? Monthly Labor Review, 137.
  • Robinson, J. P., Martin, S., Glorieux, I., & Minnen, J. (2011). The overestimated workweek revisited. Monthly Labor Review Online, 43–53.

Psychological Measurement Itself is Difficult

It is not just self-knowledge that is error prone. Our ability to measure the behavior of others is also quite limited. There is an entire field of psychology dedicated to measurement issues. It is called psychometrics. It deals with issues of validity and reliability of psychological instruments. (Concepts themselves are instruments!) However, psychometrics is not an isolated psychological discipline. Every psychologist must be reasonably well versed in psychometrics. And I mean every psychologist: whether they are researchers, clinical psychologists, consultants or something else. For that matter, every serious student of behavior, and everyone who designs or interprets the results of instruments that measure behavior (e.g., teachers and designers of self-quantification tools), ought to take several university-level courses in psychometrics and research methods.

So, if you are reading a book about productivity that is not written by someone who is well versed in psychometrics, you need to be particularly aware of the possibility that there are gaping measurement problems. More often than not, you will find that measurement is not even explicitly addressed as a problem. But being an expert is no guarantee of success. Research psychologists, like other scientists, spend a good chunk of their writing time criticizing each other’s — and hopefully their own — measurement concepts and measures. For instance, I have used the expressions cognitive productivity and meta-effectiveness many times. Yet there is no instrument to measure either construct, let alone a validated one. Tsk! Tsk! Dr. Beaudoin! Similarly, Cal Newport proposes the concept of deep work. It turns out that such concepts are quite difficult to operationalize.

However, knowledge workers can’t wait for all these questions to be resolved. Moreover, productivity itself can be broken down into several, more manageable components. So, knowledge workers need not necessarily be concerned with their overall cognitive productivity, but can target specific performances. A self-quantification system needs to be flexible enough to let the user define and experiment with a large variety of measures.

Suggested Readings:

  • Sloman, A. Conceptual Analysis
  • Anastasi, A. (1988). Psychological Testing. New York: Macmillan.
  • Stanovich, K. E. (2010). How to think straight about psychology (9th ed.). Upper Saddle River, NJ: Pearson Education.

Observing Performance Tends to Improve It

One of the oldest and yet most important findings in psychology is that people’s behavior improves when it is observed by others. There are all kinds of evolutionary reasons for such social facilitation. There are, of course, also cases where observation hinders performance (e.g., practicing a complex skill in front of a judgmental observer). But generally some degree of monitoring is good for skilled performance. Top performers can handle the pressure of observation, and use it to good effect.

Yet most knowledge work is done in private.

Fortunately, however, social facilitation does not require that the observer be someone else. Placing a mirror in front of an ADHD child who is supposed to do homework can improve her performance. I learned recently that some teenage girls do their homework remotely from each other but together via an audio-video IP link (FaceTime). They have a social contract to stay on task. This is anecdotal data. (See Stanovich, 2010, above.) However, it illustrates, and can be explained by, social facilitation.

I am not suggesting that you place a mirror and FaceTime camera in front of yourself at work. Instead, I am suggesting that fine-grained recording of one’s own activities can engage some of the same key mental processes that are at work in social facilitation, thereby improving cognitive productivity.

Based on this reasoning, using mySelfQuantifier might leverage social facilitation processes.

Measuring Performance Tends to Improve It

Implicit in the prior section is the concept of measurement. I don’t think I need to adduce many data or references to convince you that simply measuring performance is a great way to improve it, whether or not the effect is mediated by social facilitation.

Of course, one needs to use the right metrics, namely leading metrics. Referring to Ben Yoskovitz, the co-author of Lean Analytics, Dharmesh Shah writes:

A good metric is:

  • Comparative
  • Understandable
  • A ratio or rate
  • Behavior changing

And it is not a vanity metric.


Desirable Difficulties

Given that desktop app-tracking software like Timing.app can automatically provide you with copious information about your desktop projects and activities, why should you use a self-quantification system, like mySelfQuantifier, that requires data entry?

After all, on the Timing app website, Alessandro Vendruscolo says the following:

You could use a manual time tracker. But to be honest, manual time tracking sucks. You have to start and stop timers and enter what you did. And if you forget that, you are back to square one.

Not so with Timing. Instead of making you do all the work, Timing automatically tracks how you spend your time. It logs which apps you use, which websites you visit, and which documents you edit. And if you are a freelancer, you can export that data to create invoices.

For this section, let’s set aside the fact that not all of your activity is on the desktop, and that iOS (I can’t speak for Android on this) does not support app time tracking.

One of the answers involves one of the most important concepts in applied psychology: desirable difficulties. The concept is analogous to the “no pain, no gain” principle. To be sure, not every difficulty is desirable; nor is ease intrinsically undesirable: productive laziness is a good thing! Let’s take a well-researched example from education. Re-reading a document is much easier than answering questions about it. But answering questions improves comprehension and recall more than re-reading. The concept is discussed in Cognitive Productivity.

At a computer, it is very easy — often too easy — to switch from one activity or project to another. This facility makes us highly susceptible to distraction.

Discretely noting that you have switched tasks interrupts this switching. Recording the switch is a small penalty to pay if it helps you stay on track. It is not a big penalty, because once you get into the habit of it, it only takes a few seconds to record a switch. But that time is spent doing exactly what you need to do to be productive. That is, to ask yourself: What’s my goal? Is it more important to pursue this new goal than to return to the prior activity/project, or to the other goals on my list? Answering those questions is a desirable difficulty if it helps keep you on track.

Even if you don’t bother to analyze your time data much, simply getting into the groove of recording task switches might help you stay on track.

This is also part of a more general principle: don’t outsource too much of your thinking. (See also the SharpBrains web site on that.)

The Importance of Knowing Your Way Around Your Projects and Activities

One of the major challenges of being a knowledge worker is the number, fluidity, and complexity of the projects we deal with. Moreover, projects come and go quickly. Projects spawn sub-projects. As a result, a major area of competence is being able to navigate the virtual space of one’s projects, and to know how they relate to the projects of one’s team, organization and clients.

Another major challenge of being a knowledge worker is that we engage in very fluid, largely cognitive, activities. As a result, we easily and frequently switch not only from one project to another, but from one type of mental activity to another. It is very important to note that the number of activities we engage in is actually much smaller than the number of our projects. But activities are of a different, and higher, order of complexity and opacity than projects. To use the extension to Popper’s concepts that I introduced in part 2 of Cognitive Productivity, mental activities are largely tacit, subjective, “World 2′” (World-2 prime) activities, whereas projects are public, World 3 entities. Knowledge workers rarely talk explicitly about the details of some of their most important activities, such as reading and writing. Yet we know from cognitive science that explicitly and consciously representing mental activities is critical to improving them.

David Perkins, of Harvard University, developed a simple but powerful metaphor for thinking that is germane to this topic: the geographical metaphor. Just as expert cab drivers must know their way around a city, knowledge workers must know their way around the realms of thinking. (Carl Bereiter discusses this in Education and Mind in the Knowledge Age. Chapter 9 of Cognitive Productivity is called “Learn your way around your R&D”.)

Here’s a quote from Cognitive Productivity.

David Perkins proposed a geographical metaphor for understanding intelligence that is germane to this chapter (Perkins, 1995). Reflective intelligence, to Perkins, involves knowing one’s way around, and being able to navigate within, realms of thinking: thinking dispositions (dispositional realms), thinking challenges (challenge realms), techniques for thinking (tool realms), resources that support thinking (technical realms), thinking situations (situational realms) and contexts of thinking (contextual realms). Each of these increasingly specific realms has its action, belief and conceptual systems. If you are merely acquainted with a big city, for example, you probably don’t really know how to get around there. You will need to call upon maps, information technology and people. Without knowing your way around linear algebra (a technical realm), you would struggle to model cognition with neural networks. Part 1 and 2 of this book were designed to help you “learn your way around” your purposes, challenges, dispositions and mind. 

Meta-effectiveness involves knowing one’s way around the realms of one’s research and development.[^290] More specifically:

  • To know one’s way around the different levels of knowledge-processing: surfing, delving and developing. This pertains to important things we do with knowledge resources: we inspect, assess and delve them. By developing and applying knowledge, we develop ourselves.
    • To know one’s way around one’s knowledge resources and the meta-information one generates in processing them. This concerns generating, navigating, accessing and utilizing meta-information. Ultimately, our effectiveness hinges not on exploiting entire knowledge resources but on the information we construct about them.
  • To know one’s way around one’s R&D projects and activities: capturing, classifying and organizing our knowledge work in projects and tasks. We also need to understand the different types of activities we engage in with knowledge.

mySelfQuantifier is meant to help users learn their way around projects and activities. I expect that people who record their projects and activities in mySelfQuantifier will, on average, better “learn their way around” their own projects and activities. If you use a GTD® app, such as OmniFocus, you probably have a fairly good mental map of your Projects. However, you might not have a very good sense of the time you spend in them. And you might not be as familiar with them as you could be by explicitly quantifying them. mySelfQuantifier can be used in conjunction with OmniFocus, though the link is not yet programmatic. (OmniFocus has an API, however, that in principle makes it possible to access projects from mySelfQuantifier.)

A weakness of Getting Things Done and OmniFocus is that they have no systematic way of recording activities. As I argued in Cognitive Productivity, this is a problem. There, I noted that the notion of Context is a weakness of the Getting Things Done book. Context may have been quite important in the days when Allen wrote his book. However, today context matters much less to many of us because we can do so much work virtually and asynchronously. I find that a better way to leverage old-style contexts is to have a hierarchical meeting file or node (e.g., in an outliner or GTD app) for each person, and to have a “next” node there listing what to do with that person. (This partly has to do with the fact that meeting files can be accessed more quickly via a launcher, such as LaunchBar, than an OmniFocus context. There are other hacks, however.) In Cognitive Productivity, I proposed that the context field of OmniFocus actions should be usurped by activities. I often do that. However, ideally, OmniFocus should automatically detect the action type based on the verb in the task name. (I hope OmniGroup improves OmniFocus along these lines. I am confident that data analytics would show that people hardly use the context field, and would benefit from a redefinition. This is one of many examples of the problems knowledge workers face because Getting Things Done is not integrated with science.)

Having said that, mySelfQuantifier overcomes these limitations by providing a field specifically for activities. You can later review your log to see how much time you have spent in various activities, such as

  • surfing sources of information (which, incidentally, is not necessarily shallow work),
  • delving (carefully processing information),
  • practicing productively,
  • meeting,
  • meditating,
  • writing,
  • emailing,
  • etc.

You can also include social media in your activities. For example, here are some of the activities I’ve defined under “net” which to me means communication/social-networking:

/net/FaceBook
/net/share/photos
/net/Instagram
/net/Email/
/net/meet/Skype

Logging these activities may help one keep low cognitive-value activities in check and increase higher cognitive-value (“deep work”) activities.

With mySelfQuantifier, you can assess time spent per project and per activity. For instance, if you write blog posts, you would define one project per blog post. Then you can see how much time each blog post took you to write.

Because the system uses a spreadsheet, you can easily calculate combinations of projects and activities. Because projects are hierarchical, you can calculate time spent at different levels of a project, e.g., time spent on a chapter, a section, or an entire book. You could also, in principle, do analyses with parameters (e.g., how much time was spent reading a particular book, if that is recorded in a parameter). Not that I’m suggesting you all go nuts logging and analyzing everything. Logging takes a bit of time, though it’s quite fast with this system (with a computer at least), and presumably in the future accessing data from time-tracking apps, and other cognitive productivity apps, will be easier.
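
To make that kind of analysis concrete, here is a minimal sketch in Python. It assumes, purely for illustration, that the log has been exported from the spreadsheet as a CSV file named myselfquantifier_log.csv with columns start, end, project and activity; the file name and column names are hypothetical, and mySelfQuantifier itself performs such analyses directly in the spreadsheet.

    # Sketch only: aggregate logged time by activity and by project level.
    import pandas as pd

    log = pd.read_csv("myselfquantifier_log.csv", parse_dates=["start", "end"])
    log["hours"] = (log["end"] - log["start"]).dt.total_seconds() / 3600

    # Total hours per activity (e.g., /net/... versus deeper activities).
    print(log.groupby("activity")["hours"].sum().sort_values(ascending=False))

    # Projects are hierarchical paths, so roll them up to a higher level,
    # e.g., a hypothetical /books/cp2/ch3 becomes /books/cp2.
    log["project_top"] = log["project"].str.split("/").str[:3].str.join("/")
    print(log.groupby("project_top")["hours"].sum())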

Quantifying Time Spent in Deep Work

In Deep Work, Cal Newport discusses projects and activities, though he doesn’t delve into issues of “knowing one’s way around”. Nor does he suggest quantifying projects and activities.

Newport does suggest quantifying time spent in deep work, however. For this, he uses paper and pencil. That is inefficient. It is also error prone (forcing one to measure in large increments, leading to self-report problems). Further, it does not efficiently lend itself to quantitative analysis.

I have found it helpful to have a separate column for “deep work” (I could have used the parameter column instead, but that requires more parsing). I also have pages to quantify deep work. My own time spent in deep work improved when I started doing this. How do I know this, given that I wasn’t measuring deep work before? I can tell that my time spent in shallow “/net” activities (Facebook, etc.) has decreased, and my time in other, deeper activities has increased (for a sample anyway). This isn’t a controlled experiment. But everyone must place their bets, and with a self-quantification system, one at least has extensive data to go on.

So, with mySelfQuantifier you can easily track the time spent in deep work, per day, week or whatever interval you choose.

You could also automatically see how much deep work time you spent per project (or subproject). Given the limits on self-knowledge alluded to above, you might be surprised by the data. (I know I have been.)
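
Continuing the sketch above (same hypothetical CSV export), a “deep” column could be tallied per week or per project along these lines; again, the column names are assumptions for illustration only.

    # Sketch only: tally deep-work hours per week and per project, assuming a
    # hypothetical "deep" column that marks deep-work rows (e.g., with an "x").
    import pandas as pd

    log = pd.read_csv("myselfquantifier_log.csv", parse_dates=["start", "end"])
    log["hours"] = (log["end"] - log["start"]).dt.total_seconds() / 3600
    deep = log[log["deep"].notna()]

    print(deep.resample("W", on="start")["hours"].sum())   # deep-work hours per week
    print(deep.groupby("project")["hours"].sum())          # deep-work hours per project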

Quantifying All Kinds of Activities in an Integrated Fashion

The tendency today is to use specialized apps to monitor and record activities. An app for this, an app for that, and so on. Then our data are splattered across different apps. Apple aims to help us integrate these data and present them to us in simple ways (the Health app / HealthKit). This is good; I use it.

However, what if you want to do the integration of data yourself, and do not want to engage in fancy coding? What if you want access to your data (and they are, after all, your data) in a friendly, sensible format so that you can access and manipulate them, and you don’t want to view XML (though it could be a transparent exchange format)? (Why is Markdown not XML? For a very good reason.)

And what if your needs are rather esoteric? Suppose you don’t want to use yet another app, learn its API, etc. You are quite willing to define your activities your way. In fact, you’d rather define them your way, to have complete control, and not waste your time managing another bloody app.

The mySelfQuantifier approach is to put everything in a spreadsheet. Of course, if there were an app that easily provided you with the data you need, in your format, then ideally you could automatically import the data provided by a third party into mySelfQuantifier. That day might come. Who knows?

I will give just a few examples here:

Weight

I like to track my weight. One row does it:

  • Time start
  • Time end
  • Text expansion abbreviation: :weight
  • Expansion: /pnl/health /log weight:

Then I enter my weight. I get the data from Siri. When I weigh myself, I say “Hey Siri, make a new note saying that my weight is … ”. At my desk, I enter the value.

To be sure, I’d rather not have to log the data myself. But I want the data in a format I can work with, and in my timeline.
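
If you ever wanted to chart that weight series outside the spreadsheet, a small sketch along the following lines would do it. It assumes the same hypothetical CSV export as above, plus a hypothetical parameters column holding entries such as “weight: 72.5”.

    # Sketch only: extract a weight time series from the hypothetical CSV export.
    import pandas as pd

    log = pd.read_csv("myselfquantifier_log.csv", parse_dates=["start"])
    values = log["parameters"].str.extract(r"weight:\s*([\d.]+)")[0].astype(float)
    weight = pd.Series(values.values, index=log["start"]).dropna()
    print(weight)   # weight over time, ready to chart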

Standing at work

I work standing up, most of the time. How do I know this? Because I track the time I spend sitting, like this:

  • Text expansion abbreviation: tt: gives me the time, which I use for the start
  • Text expansion abbreviation: :sit
  • Expansion: /pnl/health /sit

When I shift positions, I “close” the activity (the row). I don’t need or want an app to track my standing.

Incidentally, I switch from sitting to standing in only a few seconds. Here is my setup.
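
As a rough illustration of what one can do with such rows, the following sketch computes the fraction of logged time spent sitting. It uses the same hypothetical CSV export, assumes sitting rows carry the activity /pnl/health /sit as expanded above, and, purely for simplicity, treats all other logged desk time as standing.

    # Sketch only: fraction of logged time spent sitting vs. standing.
    import pandas as pd

    log = pd.read_csv("myselfquantifier_log.csv", parse_dates=["start", "end"])
    log["hours"] = (log["end"] - log["start"]).dt.total_seconds() / 3600
    sit_hours = log.loc[log["activity"] == "/pnl/health /sit", "hours"].sum()
    print(f"Sitting fraction: {sit_hours / log['hours'].sum():.0%}")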

Weightlifting, jogging, cycling, climbing stairs

I similarly like to track my exercise. Like weight, there’s nothing particularly esoteric about logging weightlifting, jogging, cycling or climbing up and down stairs.

I do use iOS for some of this. (Health, for instance, tells me how many floors I’ve climbed.) But frankly, I prefer having the data in my uniform format in my spreadsheet.

Personally, I don’t care how many kilometers I’m cycling. I have a fixed route that I do several times a week. I just care about the start time, end time, duration, and some parameters. In particular, I have occasionally tracked my performance with and without caffeine. Adding this one parameter to someone else’s data format would be a pain. With mySelfQuantifier, it’s just a parameter:

  • Text expansion abbreviation: pcaff:: (I use “::” as a suffix notation for keyword parameters that don’t take an argument. mySelfQuantifier doesn’t impose this: you define the language.)
  • Expansion: postCaffeine::

Your needs / whims will be different from mine. And different from apps. A spreadsheet can accommodate all that.
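
As a sketch of how such a keyword parameter could later be analyzed, the following assumes the same hypothetical CSV export, a hypothetical activity path /pnl/health /cycle for rides, and a parameters column that may contain the postCaffeine:: keyword.

    # Sketch only: compare ride durations with and without the postCaffeine:: keyword.
    import pandas as pd

    log = pd.read_csv("myselfquantifier_log.csv", parse_dates=["start", "end"])
    log["minutes"] = (log["end"] - log["start"]).dt.total_seconds() / 60
    rides = log[log["activity"] == "/pnl/health /cycle"]    # hypothetical activity path
    with_caffeine = rides["parameters"].str.contains("postCaffeine::", na=False)
    print(rides.groupby(with_caffeine)["minutes"].mean())   # mean ride time, without vs. with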

Aside

Is personal health monitoring a waste of time? To each their own. There seem to be lots of benefits to working standing up. But I do not wish to stand all the time. I’ve been successfully working standing up since Jan 2010. I exercise about 6 days a week, always vigorously. And my weight is where I want it to be. This is not just because I log. But let’s not forget the principles of self-monitoring as social facilitation. This system keeps me honest with myself. There is lots of research suggesting that adherence to exercise regimes is difficult. “Make it social” is an oft-mooted principle. Self-logging doesn’t make exercise fully social, but it is psychologically quasi-social. There are reasons, some noted above, why professional coaches and athletes quantify their performance.

Also, in January 2015, I used medications that affected my weight and some other important variables. I was able to do A-B, A-B experiments, and compare my weight over long periods of time. The experiment wasn’t completely blind. However, I didn’t know at first that the medication (which I have since trashed) affected my weight or these other variables. I was able to tell by thinking, looking at the literature, looking at the data, and then manipulating the independent variables at my disposal.

In any event, a unified spreadsheet can also be used to track other types of work and personal data.


User Involvement in Cognitive Activity Logging is Beneficial and Often Necessary to Self-Quantification

The Quantified Self movement generally aims to have data collected automatically. This is good. However, we believe there are certain activities the manual logging of which can be quite helpful. These include some of the core activities of knowledge work. Our claim (to be tested) is that self-logging promotes awareness of important shifts in cognitive activities.

Moreover, software will not, any time soon, be able to accurately and reliably detect your current intentions (projects and goals) or the results of your actions.

Before and after the iPad was announced in 2010, I called for detailed activity logging in iOS and OS X. Apple has not moved far in this direction, though inter-app communication and iCloud are steps in the right direction. Apple would need to explicitly provide support for high-level user-activity monitoring.

mySelfQuantifier as a Tool for Scientists

Self-quantification systems like mySelfQuantifier are not merely for people to measure and analyze their own behavior. They are also tools to help scientists study cognition in general and cognitive productivity in particular. For instance, mySelfQuantifier could be used to collect data and test various theories and systems about cognitive productivity.

For example, without such a system, it would be difficult to fairly assess Cal Newport’s Deep Work system. To be sure, this self-quantification system extends Newport’s system. But it is quite normal in science for measurement to extend theory. (The philosopher of science Imre Lakatos was perhaps the first to persuasively document how theories can grow in conjunction with the development of measurement instruments.)

Another pertinent example is Stephen Covey’s third principle of personal management, “Put First Things First”™. (Compare page two of this PDF from Franklin Covey.) His time matrix has two dimensions, importance and urgency, and thus four quadrants:

  1. Urgent and important,
  2. Important, not urgent,
  3. Not important, urgent,
  4. Not important, not urgent.

We are told to try to spend our time in Q2 (important, not urgent).

Wikipedia reports that this book has sold 25 million copies. I consider it an excellent book. However, in order for individuals and scientists to test Covey’s theory empirically, they need a simple, convenient, and accurate self-quantification system.
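
To illustrate how logged facts might be mapped onto Covey’s matrix, here is a minimal sketch. It again assumes the hypothetical CSV export used above, and hypothetical important:: and urgent:: keyword parameters in a parameters column; none of this is prescribed by Covey or by mySelfQuantifier.

    # Sketch only: tally logged hours per Covey quadrant from hypothetical keywords.
    import pandas as pd

    log = pd.read_csv("myselfquantifier_log.csv", parse_dates=["start", "end"])
    log["hours"] = (log["end"] - log["start"]).dt.total_seconds() / 3600
    imp = log["parameters"].str.contains("important::", na=False)
    urg = log["parameters"].str.contains("urgent::", na=False)
    quadrant = (imp.map({True: "important", False: "not important"})
                + ", " + urg.map({True: "urgent", False: "not urgent"}))
    print(log.groupby(quadrant)["hours"].sum())   # e.g., how much time lands in Q2?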

Without considering the further dimensions of project and activity, as mySelfQuantifier affords, it seems unlikely that individuals can adequately classify their actions with respect to Newport’s or Covey’s systems. For those systems implicitly refer to projects and activities. Users need not explicitly refer to the project and activity in every fact they record, but the project dimension should at least figure in the classification. Most psychologists, brought up through the ranks of 8-10 years of university studies, have had plenty of experience classifying and assessing behavior with psychological instruments (whether with humans or other animals), and assessing measurement instruments, for that matter. I assume they would, or at least should, expect their measurement instruments to be a bit more detailed than Newport’s and Covey’s.

Returning to the issue of people’s tendency to overestimate their work time: it would be an interesting study to compare

  1. the estimates of time worked by people who use mySelfQuantifier but do not consult its analysis tables,
  2. the estimates of those who log their behavior and do consult the analysis tables, and
  3. the estimates of a control group who estimate their time without using mySelfQuantifier.

In addition, self-quantification systems can help scientists clarify their thinking about real-world human cognition, regardless of whether they are used to capture participants’ data. They could first be used by scientists on themselves. In that respect, mySelfQuantifier is an “intuition pump”.

Readings

Covey, S. R. (1989/2004). The 7 habits of highly effective people. Free Press.

mySelfQuantifier History: The Learning Kit Project

It might help to understand the history behind mySelfQuantifier.

From 2002 to Dec. 2009, at the Faculty of Education of Simon Fraser University, I (Luc Beaudoin) led teams of software developers, developing large-scale software for logging and analyzing human cognitive behavior. The Principal Investigator of these projects was Prof. Phil Winne. Many highly qualified educational psychologists, from several universities, collaborated on our projects. We designed and implemented several personal learning software environments: StatStudy (a Java app for learning statistics), gStudy (a Java app for learning just about anything, part of the Learning Kit Project), and nStudy (a web app for learning just about anything). These were very ambitious, psychologically informed personal learning environments that were designed to help

  • learners learn,
  • content developers deploy learning content in a personal learning environment, and
  • researchers run studies of various types, including experiments, and hence manipulate independent variables and collect enormous amounts of highly structured and highly diverse user data.

In these projects, we addressed very hard problems. We thought long and hard about how to log data in such a way as to illuminate, and hopefully allow us to infer, user intentions and activities.

These projects were probably the most ambitious educational software research projects addressing the objectives listed above, generating mounds of data about a huge variety of meaningful self-regulated learning activities.

This history is relevant to mySelfQuantifier for many reasons. As mentioned above, like nStudy, mySelfQuantifier is meant to help researchers, tool developers, and end users alike. Years of thinking about logging and interpreting cognitive behavior have shaped mySelfQuantifier. mySelfQuantifier is more general than the other apps in that it deals with all kinds of activities. For instance, it can log offline data. Also, after grappling for years with the very difficult challenge of trying to infer users’ intentions and activities, I decided to radically simplify the problem with mySelfQuantifier: let users explicitly record the projects, activities, goals and results that matter to them. Let them invoke other logging and analysis tools as needed. The Learning Kit project had many other beneficial influences on mySelfQuantifier not mentioned here.

(I should point out that the nStudy R&D project is still active. In 2010, I became an Adjunct Professor at SFU and founded CogZest. From CogZest, I spun off CogSci Apps Corp. Since 2010, I have designed and co-led the development of other psychological research tools, including the mySleepButton® family of apps.)

(This section calls for a separate article.)


mySelfQuantifier: More History

Another bit of history for mySelfQuantifier is that I have been working on its core problems since 2002 when I felt the need to better track my time, activities and projects. I’ve been logging increasingly detailed data about my work since 2002, gradually refining and generalizing the system.

WARNING Regarding Obsessive-Compulsive Disorder (OCD)

To improve productivity with self-quantification it is important not to go overboard. With mySelfQuantifier, you could track almost everything you do. And you could in principle fill out all kinds of parameters. That is not the intention of this system. For some people, it will suffice to occasionally use the system in order to obtain certain metrics, and then stop using it for months. Others will want to use it on a daily basis, only rarely filling out the optional fact attributes (e.g., “rationale”, “next”, “side-tracked”). Yet others will systematically measure certain attributes, such as project and “deep work”.

The tendency to obsess, in the population, lies along a continuum. It is reasonable to speculate that people who extensively engage in time tracking tend to obsess more than others.

Obsessive-compulsive disorder is an anxiety disorder. It is reasonable to expect that using a self-quantification system could be harmful to people who have, or are at risk of developing, obsessive-compulsive disorder.

Obsessing over time tracking would interfere with your productivity and well-being.

If your obsessions interfere with your quality of life (ask those who know you very well; they might know better than you), then we would recommend (a) not using this system, (b) educating yourself about obsessive-compulsive disorder, and (c) considering seeing a clinical psychologist about this treatable condition. (OCD ought not to be confounded with obsessive-compulsive personality disorder.)

We encourage scientists studying The Quantified Self to study the relations between OCD and self-quantification. They might find mySelfQuantifier to be useful for this purpose.

Relation to our other projects: In my Ph.D. thesis, I briefly speculated about a link between obsessive-compulsive disorder and perturbance (tertiary emotions). Perturbance is the key feature of tertiary emotions and, I believe, of obsessive-compulsive disorder. We have a project on perturbance: Affective Self-regulation: Volition, Emotion, Motivation, and Attitudes.

Recommended Readings

  • The Quantified Self http://quantifiedself.com/.
  • Evans, B. (2015). Let’s make advanced self-measurement more accessible: Bob Evans at the 2015 Quantified Self Public Health Symposium. Medium.
  • Mackenzie, A. (2008). The affect of efficiency: Personal productivity equipment encounters the multiple. Thinking, 8(2), 137–156. (A scholarly critique of Getting Things Done.)
  • Perkins, D. N. (1995). Outsmarting IQ: The emerging science of learnable intelligence. New York, NY: Free Press.

Disclaimers

Author

Luc P. Beaudoin