Identifying what’s coming in

When I stopped thinking in terms of Knowledge, the obvious alternative was to think in terms of Information. Instantly I slipped one step further, to Information flows and to what is coming to me.

This post is a follow-up to A PKM Clean Start.

My goal here is to show how I identify each and every incoming fragment, along with a suggested way to analyze it. I take my own case as an example, but I generalized it based on past experience and on how I know others experience it.

This first part goes at length into the physical level of incoming fragments; the second will focus more on selected types of messages.

Fragment: I use the term fragment instead of message or document to avoid assuming that the incoming thing is complete in itself.

Filer, Kneaver Desktop Filer: my go-to tool for this process; I call it Filer for short. It’s a tool providing a holistic view of every document system I use: mail, folders on the hard disk, download areas, webpages, Evernote. Instead of looking in 3 different places for a customer, I look in only one place. So for most of my readers ‘Filer’ will be ‘Evernote’.

KneaverDB is the semantic database part of Kneaver.

Incoming: What is coming in.

“Incoming” is really in strict opposition to what I create for others to consume, or what I create for myself to keep track of things or offload memory to my outward brain. Of course, it’s a muddy definition. A comment on my writing will include some of my writing; a note I kept for myself may end up being published or shared. A text I copy for further reference is really an incoming document. Let’s be tolerant and live with some fuzziness. As you will see later, it’s not critical.

“Incoming” taken strictly is only what is pushed to me, as opposed to what I pull. Now this is muddy again. The last mile of mail is POP3, which works in pull mode. That doesn’t make mail pulled, because the decision, the intent that led to this mail existing in my mailbox, was someone else’s. So generally speaking, incoming will be whatever I didn’t control in the hours before it arrived.

I’m going to anticipate and start considering incoming fragments in terms of actionability, as well as my former criterion of whether they qualify as knowledge.

The functional viewpoint privileges “what can I do with it” over “what is it”. JavaScript popularized a functional approach that took over from the sacrosanct object-oriented approach of the 1980s. GTD does the same, as does the good old accounting document management my mother used to teach. The difference is that we will add a wider range of actions.

Let’s go

Snail mail: let’s go paperless

This is what arrives via the post in my metal mailbox outside. It also includes any mail that comes through the mailbox companies I used for my various businesses.

– Open the mail.
– Non-targeted advertising, calls to renew old subscriptions => delete.
– Scan everything else; write an “s” on the lower right corner of each side.
– For every time-sensitive and important document, scan the envelope as well to keep a trace of where it was posted and how long it took to reach me.

I’ve been doing this for the last 20 years. This means that every paper I received is now archived on my hard disk. When I travel for months I carry all my archives with me. I can access an old invoice from a supplier or a letter from a past customer at any time. It has been a major element of my ability to become location independent. It’s not something you start 2 months before you leave for your first long-distance trip; the value is in being able to go back over long periods.

I do a first review of the mail and drop everything without any legal value. I retain a hard copy of anything with legal value (official letters, contract-related documents), anything that could be used in court.

The paper documents go on a large stack I rarely sort anymore. It just accumulates there and I archive it year after year. A few times a year I go through it to fetch a precise document I need; I already know in which box to look. I keep 3 archive boxes: Personal, Admin France, Admin US. 80% of the documents have accounting value.

The scans go into my day folder. Every day, Kneaver Filer starts a new folder. Scanned documents are named by date, like every note I take. It’s not a necessity: everything I download goes there as well and retains its name, but a database keeps extra attributes like the creation date.
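The day-folder convention above can be sketched in a few lines. This is my own reading of it, not Kneaver’s actual layout: I assume a year subfolder and ISO dates, which keep folders sorted chronologically by name.

```python
from datetime import date
from pathlib import Path


def day_folder(root: Path, when: date) -> Path:
    """Return (and create) the folder for a given day, e.g. root/2024/2024-05-17."""
    folder = root / str(when.year) / when.isoformat()
    folder.mkdir(parents=True, exist_ok=True)
    return folder


def dated_name(stem: str, ext: str, when: date) -> str:
    """Name a scan or note by its date, e.g. 2024-05-17-invoice.pdf."""
    return f"{when.isoformat()}-{stem}{ext}"
```

With names like these, a plain directory listing doubles as a chronological index, and the database only needs to add the extra attributes.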

There are also magazines, books, and catalogs.

Books are entered in a database and get an inventory number. I write this number on the first page. I later refer to this number in my notes; it’s faster than copying the title.

Catalogs go on a stack and accumulate dust. Residual FOMO.

Magazines stay around and are later archived in a box if they have some value. Valuable magazines are also recorded in a database, for the same reasons. I got my education in indexing and archiving strategy from a professional head librarian ex-girlfriend. Strong education, believe me. My database is organized according to the Dewey standard and uses DC, Dublin Core, descriptions. Of course, it’s part of the Kneaver DB kernel.

I keep free local magazines for my visitors. They are happy to get some reading on what goes on around here: events, interesting places to visit.

Digital incoming fragments: let’s go stream

Digital incoming fragments reach me via protocols and tools. Most of you see the tools. As a software engineer fluent in APIs, services, and standards, I find it easier to consider them as streams.

A stream has a source. The source is often a service.
– A service could be “twitter” or “facebook”.
– A source could be a mailbox.
– Sometimes it’s a bit of both: the service twitter with the account “kneaver” will deliver the direct messages (DMs) for kneaver.

The stream is made of items delivered to me via a protocol served by the service. For your education, POP3 is common for email, a REST API for services like Twitter.

Each item of the stream has an emitter, a sender: the person who had the intent to reach me, nominatively or not.

Each item has a payload, usually a text. The text is written in a format like HTML, plain text, or Markdown.

Each item can be independent or part of a larger conversation: responses, replies, retweets, favorites, bookmarks. In those cases, there is a reference to another item.

If the item is just a chunk of text and is not independent, it is impossible to make any sense of it without aggregating every related item.
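The item anatomy above (emitter, payload, optional reference) can be sketched as a small data structure, plus a helper that aggregates the conversation an item belongs to. The field names and the flat id-based reference are my assumptions, not Kneaver’s schema:

```python
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class StreamItem:
    item_id: str
    source: str                # e.g. "twitter:kneaver" or "mailbox:pro" (my labels)
    emitter: str               # the person who had the intent to reach me
    payload: str               # text body: HTML, plain text, Markdown...
    ref: Optional[str] = None  # id of the item this one replies to, if any


def thread_of(item: StreamItem, index: Dict[str, StreamItem]) -> List[StreamItem]:
    """Follow the reference chain to gather the conversation an item belongs to."""
    chain = [item]
    while item.ref and item.ref in index:
        item = index[item.ref]
        chain.append(item)
    return chain  # from the latest reply back to the root
```

A reply like “I like this” only gets its meaning from `thread_of`, which walks back to the root; an item with no `ref` stands alone. That walk is exactly what a single-item pipe cannot do.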

It’s a bit complex, but it’s important to scratch the varnish to understand why accessing a single item, as a system like IFTTT does, is often insufficient to deal with it. The referenced item could be old or out of scope. The language used will depend on the emitter. If a close friend tells me “call me”, it’s totally different from a spammer trying to lure me. An approval message on Twitter like “I like this” from an unknown source unrelated to me is likely spam, while it could be the expected sign from an influencer or a kind sign of interest from a WOL connection.

With 5 pieces of snail mail per day, the job was easy for our ancestors. We deal with 100 emails and 200 social media notifications per day. We need to filter; we need to automate. Without some understanding of how it works, automation is prone to fail or overload you.

How many streams do I have?

About 30.

5 mailboxes: 2 private, 3 professional

Newsletters are sorted out instantly and automatically, and moved into the RSS river.
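The post doesn’t say how the sorting is done, so here is one common way to classify newsletters automatically: bulk mail almost always carries a List-Id or List-Unsubscribe header (RFC 2919 and RFC 2369). Routing a match into an RSS river is out of scope; this sketch only does the classification:

```python
from email.message import EmailMessage


def is_newsletter(msg: EmailMessage) -> bool:
    """Heuristic: mailing-list and newsletter software sets these headers."""
    return msg.get("List-Id") is not None or msg.get("List-Unsubscribe") is not None
```

A mail client rule or a small fetch script can then divert matching messages out of the inbox before you ever see them.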

4 twitter accounts

On each Twitter account, I distinguish notifications, lists, and direct messages.

One facebook account

One facebook group

One facebook page

I almost gave up and have totally stopped monitoring LinkedIn and Google Plus.

I have a large set of blogs I follow via RSS (it’s called a river), but I very rarely dip into it anymore. Yet I do monitor it.

At this stage what is important is:

– Each message is either targeted at a large audience, or at just me and a tiny group (fewer than 20).
– Each message is either independent or heavily dependent on a context.
– Important messages have been preserved, so we are free to move messages around and even discard them as needed.

What’s next

This is it for this part. We saw how to move from paper to paperless and how to consider every digital stream from a unified point of view.

Next we will look at what is likely to come in through each of those streams and what can be done with it: separating out noise, setting aside what we could possibly use but shouldn’t spend even one more second on, and prioritizing what we should deal with. That’s where decisions will take place, or even better, where avoiding decisions will matter. For that, having a framework, a strategy, a toolbox full of handy tools, and a compass will be important.