I wish to make it easier for new Kneaver users to enter their key domains and people during onboarding. I plan to use the new slate system while leveraging Kneaver's semantic web and social media connector capabilities.
Inspired by Helen Blunden’s post on mapping her Personal Learning Network, I decided to give it a try using the tools I have.
I first made some manual simulations using pen and paper; now I'll move on to coding.
I will update the post as I make progress.
Basis of the analysis
Using friends and followers lists is not going to help. For many of us, those lists have grown beyond manageable dimensions and serve more purposes than just learning.
Favorites don’t help us much either, because they tend to be used as low-intensity signals. Favoriting a tweet indicates it has been read, noticed or appreciated, more than that the person wants to bookmark it forever.
So the real ground of the analysis is made of our tweets and our mentions (replies and retweets will also be included). Whether the other party follows us, or is followed by us, will be an additional indicator of the long-term interest we have in each other.
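As a minimal sketch of that extra indicator, assuming the two follow lists are already available as plain sets of handles (the function name and the handles below are hypothetical, not part of Kneaver):

```python
# Hypothetical sketch: score the long-term mutual interest with a peer
# from the follow relationship. 2 = mutual follow, 1 = one-way, 0 = none.
def follow_score(peer, i_follow, follows_me):
    """i_follow / follows_me are assumed to be sets of Twitter handles."""
    return int(peer in i_follow) + int(peer in follows_me)

# Fabricated example data, not real accounts
i_follow = {"alice", "bob"}
follows_me = {"alice", "carol"}

print(follow_score("alice", i_follow, follows_me))  # mutual -> 2
print(follow_score("bob", i_follow, follows_me))    # one-way -> 1
print(follow_score("dave", i_follow, follows_me))   # none -> 0
```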
To be realistic, the sample must cover 2 months. This allows us to overcome short interruptions in relations due to lulls in topics to share, time offline, or overwhelmed or slow-responding peers. It also gives us perspective on the relation.
So I will use 2 months of tweets I wrote and 2 months of mentions. The Kneaver module KNVStreams collects this for me automatically 4 times per hour. It is stored in a table as well as a shard, like a compressed RSS feed.
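KNVStreams internals aren't shown here; as a minimal sketch, assuming the collected feed is already available as a list of (timestamp, text) records, cutting the 2-month window could look like this (61 days is my own approximation of "2 months"):

```python
from datetime import datetime, timedelta

def last_two_months(records, now=None):
    """Keep only (timestamp, text) records from the trailing ~2 months (61 days)."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=61)
    return [r for r in records if r[0] >= cutoff]

# Fabricated example records
now = datetime(2014, 3, 1)
records = [
    (datetime(2014, 2, 20), "tweet inside the window"),
    (datetime(2013, 11, 1), "tweet outside the window"),
]
print(last_two_months(records, now=now))  # keeps only the February record
```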
Chicken and Egg problem
Knowledge areas first or people first? Depending on where you start, you will end up with very different results. This is because our interactions mix out-of-band conversations, occasional encounters, and signals to friends with work-oriented exchanges.
My idea is to do several iterations between content and people. First, take all the tweets and run Kneaver's natural language analyzer, KNVNLP, on them. It has been used for 2 years to analyze Twitter chats, so it's pretty robust and has a huge corpus.
From this I will identify dominant topics.
I can then select those topics and run the analysis again to see with whom I exchange on those topics.
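The iteration above can be sketched as two passes over the same tweets. KNVNLP isn't public, so plain hashtag counting stands in for the real topic analysis here, and @-mentions stand in for "with whom I exchange" (all names and tweets below are fabricated for illustration):

```python
from collections import Counter
import re

def dominant_topics(tweets, top_n=2):
    """Pass 1: crude stand-in for KNVNLP -- count hashtags as topics."""
    tags = Counter()
    for t in tweets:
        tags.update(h.lower() for h in re.findall(r"#(\w+)", t))
    return [tag for tag, _ in tags.most_common(top_n)]

def people_per_topic(tweets, topics):
    """Pass 2: for each dominant topic, count the people mentioned alongside it."""
    out = {topic: Counter() for topic in topics}
    for t in tweets:
        mentions = re.findall(r"@(\w+)", t)
        for topic in topics:
            if "#" + topic in t.lower():
                out[topic].update(m.lower() for m in mentions)
    return out

# Fabricated example tweets
tweets = [
    "@alice great point on #PKM",
    "@bob thanks for the #PKM link",
    "@carol see you at the #lrnchat session",
]
topics = dominant_topics(tweets)
print(topics)                        # ['pkm', 'lrnchat']
print(people_per_topic(tweets, topics))
```

Running the people pass again on a narrower topic list is the second iteration; in the real setup KNVNLP would replace the hashtag heuristic.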
Let’s try …