A running log of stuff


from stackoverflow

I. Encryption and decryption of data

Alice wants to send a message to Bob which no one should be able to read.

Alice encrypts the message with Bob's public key and sends it over. Bob receives the message and decrypts it using his private key. Note that if A wants to send a message to B, A needs to use the public key of B (which is publicly available to anyone), and neither A's public nor private key comes into the picture here.

So if you want to send a message to me you should know and use my public key which I provide to you and only I will be able to decrypt the message since I am the only one who has access to the corresponding private key.

II. Verify the identity of sender (Authentication)

Alice wants to send a message to Bob again. The problem of encrypting the data is solved using the above method.

But what if I am sitting between Alice and Bob, introducing myself as 'Alice' to Bob and sending my own messages to Bob instead of forwarding the ones sent by Alice? Even though I cannot decrypt and read the original message sent by Alice (that requires access to Bob's private key), I am hijacking the entire conversation between them.

Is there a way Bob can confirm that the messages he is receiving are actually sent by Alice?

Alice signs the message with her private key and sends it over. (In practice, what is signed is a hash of the message, e.g. SHA-256 or SHA-512.) Bob receives it and verifies it using Alice's public key. Since Alice's public key successfully verified the message, Bob can conclude that the message has been signed by Alice.


Reading https://github.com/RangerMauve/local-first-cyberspace

on dat:

It's kinda like torrents, but it supports more files, and you can update the contents without needing to create a new archive.


fauna & graphql

Need a localhost graphQL server for a local database


See https://www.inkandswitch.com/local-first.html


while also giving our users a piece of software they can download and install, which we discovered is an important part of the local-first feeling of ownership

Meaning electron is an essential part of ssb

The fs API is the key to the whole local-first thing in ssb. Node + electron are what make it viable to store all data locally, and local data storage is what makes it a true p2p experience. For example, pubs are just traditional servers that store all your data, but because you've downloaded all the data too, that's what makes it a cool p2p thing.

we want applications to outlive any backend services managed by their vendors, so a decentralized solution is the logical end goal.

Live collaboration between computers without Internet access feels like magic

Servers thus have a role to play in the local-first world — not as central authorities, but as “cloud peers” that support client applications without being on the critical path. For example, a cloud peer that stores a copy of the document, and forwards it to other peers when they come online, could solve the closed-laptop problem above.


Trying firebase and rxdb


11-14-2020 -- reading about beaker

Beaker stores user content on the device, and provides encrypted peer-to-peer transmission of the files.

Dat websites are executed in a restrictive sandbox on the user’s device. While traditional Web apps assume a connection to a remote host, Dats are detached and must request network rights specially.

Thick applications model

Rather than using remote services, Dat sites write user data to the local device with the localStorage, indexedDB, and Dat APIs.


By default, each dat:// origin is limited to 100MB of storage. When the 100MB limit is reached, all writes attempted with the DatArchive API will fail.


It duplicates ingested data into IndexedDB, which acts as a throwaway cache. The cached data can be reconstructed at any time from the source Dat archives.



When you commit a change with Git, it accepts as author whatever value you want. This means you could claim to be whoever you want when you create a commit. To make GitHub (and everyone else) believe that Martin authored that really terrible commit, I just had to run git config user.name and git config user.email with values that match Martin's. Those are not hard to get at all: it only took me one minute to clone one of his repos and run git log in it.

The committer details are designed just to identify which of your collaborators made a change, and are not meant to be used for authenticating people. Being able to impersonate other committers does not introduce a vulnerability per se. For example, just by setting my user.name to Martin's, I do not get the ability to push code to his repositories: GitHub would require me to authenticate with his credentials before I could do that.

If your Git hosting service allows it, you can also require with a policy that all commits must be signed. On GitHub, that's done with protected branches.


Asymmetric cryptography uses two separate keys: a public key and a secret (or private) one. As their names suggest, while the secret key must be protected at all cost, the public one can (and as will be our case later on, must) be shared with the world. With asymmetric cryptography, you encrypt a message using a public key, and then decrypt it using the corresponding private one. If you wanted to share an encrypted message with your friend, you'd use your friend's public key to encrypt it. Your friend could then use their own private key to decrypt and read your message. Algorithms like RSA or the various elliptic curves work this way.

Despite being lesser-known among the general public, asymmetric cryptography is widely used, and it's what makes TLS used by HTTPS possible too, among other things. In addition to encrypting data, asymmetric cryptography can also be used to sign messages (and verify signatures). This works the opposite way: you sign a message using your private key, and others can verify the signature using your public key.

Git commits are not signed by default, they are just a hash of the content and a pointer to the previous hash.

adding a cryptographic signature to the message

To do that you have to do two things in principle:

1. You calculate a hash (or checksum) of your message, using a hashing function such as SHA-256. Hashing functions are one-way operations that generate a unique set of bytes from each message, and they cannot be reversed. The hex-encoded SHA-256 digest of “You and I will meet tomorrow at 11.30am” is: 579c4547d8dec2c4513de8c858a490a8a2679db205a0b3471f81d5b129d29b88. If you changed even just 1 bit in the original message (e.g. change the time to 11.31am), the final digest would be completely different (try it).

2. You use your private key to sign the calculated hash, using an algorithm like RSA.



What is fission?

When you create a Fission Account, whether signing up on the web or using the command line as a developer, it creates a username and email address in our service database, and also a private / public key pair representing that account.

We also create a Fission Web Native File System (WNFS) attached to your account, and give you access to Fission Drive, which lets you browse all your files, access them from any browser, and see which apps are attached to your file system.

Each device gets its own private key using the WebCrypto API built into modern browsers. Private keys shouldn't be copied around, so instead we link keys, indicating they have access to the same account.

There is no "sign out" for a Fission-powered app. You use your key, stored in your local desktop browser, mobile web browser, or your local desktop file system with the command line developer tool, to do a passwordless login.

You may create multiple Fission accounts, but you'll need a unique email address and username for each one. You'll also need to use Browser Profiles to be able to access them at the same time on the same machine, as the keys that grant access are stored in the browser.

Sounds a lot like ssb & pubs, but with more advanced ID parts.

To have access to your account across multiple devices, you need to link them. They have multi-device linking.

Eventual gram update

Need to make a backend that functions as a pub, but the API is exposed over REST instead of RPC & websockets. That way it can be hosted as lambda functions.



Should be doing ssc today.



todos demo, not video

video intro

offline first demo

Have started reading about textile thread db -- an offline-first local database that syncs to the distributed web

video intro

indexedDb wrapped with thread API

These are less radical b/c it is a local cache, not a full replica.

the remote is considered the “source of truth”

ThreadDB aims to help power a new generation of web technologies by combining a novel use of event sourcing, Interplanetary Linked Data (IPLD), and access control to provide a distributed, scalable, and flexible database solution for decentralized applications.


Found cypher-net, an old dominic project.

Watched an old video -- 2013 Realtime Conf. git replication -- group the hashes into common prefixes, and then hash each group. That way you can tell if any of the groups contains a change, so it's more efficient to replicate.

Can't get the tags to work on the main ssb network. It returns undefined or something like that. The new plan is to write things in this readme and then parse the markdown and write it to the website, and copy paste to ssb. I guess I could also just get a stream of this feed, which is mostly development logs.

I feel like I haven't gotten too much done this week. I applied for a job. It's at some kind of local dev shop here in bellingham, weirdly. So that will be nice if that works out and I can move back to california. Otherwise just kind of poked around with things. Some ssc stuff.

ssc is what I've called the next project btw. It's a 'pub', but made with unique code (it's not a regular ssb peer). ssc is like ssb but newer, because c comes after b in the alphabet. It's ssb, but run with contemporary things -- put it on netlify, use faunaDB for storage, lambda functions. It's not any less decentralized than the current system with pubs, sort of, but it uses more boring stuff -- no websockets or RPC. The part that might be less decentralized is that it is different code than the 'client' apps. Also you have to use a different network b/c it uses different protocols.



In the back of my mind is the memory app -- basically a graph database that has a UI. I want to use levelgraph, but have been thinking that I could use it with an ssb-like network also, which I guess means things would be easy to replicate/share.



via Dominic %pYmFr6d0QwLP+YG0VNoo75PP7eYNZ1Y8C2MC9IjF5aw=.sha256 :


why flume?

Since I saw from the flume-rs readme that @piet still didn't understand my flume documentation, I'm gonna try explaining some high level things again here. Hope this helps.

Could you build scuttlebutt on just a key value store?

Well, it was originally, but it evolved towards a log oriented store and flume was refactored out of it. The problem with the key value store is that the user doesn't get to choose the keys. The key is the hash of the message. I can't create a message with a hash that you are expecting - that's basically impossible (it's a hash collision). But you can put something in a message (such as the hash of my message) and I can lookup messages that contain that something. The tool that helps you look up things is called an index. Since we don't get to choose the keys, scuttlebutt's database is not really useful without the indexes.

Some of the client apps other than patchwork do display the feeds in log order: [email protected]<=6, patchfoo, patchless. But the main user of the log order is the indexes.

Before the log oriented refactor, the primary store was a leveldb where the keys were the hash and the value was the message. That's why scuttlebutt returns js objects that are {key: msg_id, value: msg}. There were also several indexes. I think clock [author, seq], log [timestamp], feed [value.timestamp], user feed [author, timestamp], links, maybe some others too. When a message was appended to the database, the relevant indexes were also created. These were all written in a single batch to the same leveldb instance (this is important). This meant that the entire write, message plus indexes, either succeeded or failed together, which guarantees that the indexes match the data.

The big problem with this model was that it was hard to change how the indexes worked, or to add a new index. If index data was only added at write time, what do you do when you have a database full of data and want to add another index? Or fix a bug in an index? There wasn't any systematic way to do this. But then I realized you could use the log (message receive time) index for that -- the index could reprocess all the messages in receive order, just as it would have if it had been running at the time each message was received! Also, an added bonus: if the reindex crashed or was shut down part way, it could continue processing from that point, instead of starting over.
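A sketch of that reindexing idea -- a view as a pure function over the log, resumable from a saved offset. This is illustrative, not flume's actual API:

```javascript
const log = [
  { author: 'a', text: 'one' },
  { author: 'b', text: 'two' },
  { author: 'a', text: 'three' },
]

// replay the log from `since`, folding each message into the view state
function buildView (log, reduce, state = {}, since = 0) {
  for (let seq = since; seq < log.length; seq++) {
    state = reduce(state, log[seq], seq)
  }
  return { state, since: log.length } // persist `since` to resume later
}

// an example view: an index of message offsets per author
const byAuthor = (state, msg, seq) => ({
  ...state,
  [msg.author]: [...(state[msg.author] || []), seq],
})

let view = buildView(log, byAuthor)
console.log(view.state) // { a: [ 0, 2 ], b: [ 1 ] }

// a new message arrives; resume from the saved offset instead of replaying all
log.push({ author: 'b', text: 'four' })
view = buildView(log, byAuthor, view.state, view.since)
console.log(view.state) // { a: [ 0, 2 ], b: [ 1, 3 ] }
```

Adding a new view, or fixing a buggy one, is just running `buildView` again from offset 0.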

At this time (the days of [email protected] and 2), at startup patchwork scanned all the messages and built up some in-memory data structures, such as the friends graph. This delayed startup ~10 seconds at the time, but doing this now would take several minutes! However, the idea of in-memory aggregations is a good solution to some things, hence we have flumeview-reduce. (Of course, since these are coded in a somewhat ad hoc way, it's likely they have bugs that need to be fixed, so rebuilding indexes is particularly important.)

The whole point of the log is to make (re)building indexes and views easy.

Another way to think of it: if patchwork wasn't decentralized, but was just a website backed by, say, mongodb -- you wouldn't do things like we do them at all. Instead of storing every message as its own record, when you replied to a thread, they'd make an http request to update the key representing that thread. But that wouldn't work with scuttlebutt, because there isn't anyone with the authority to decide whether a given update to a thread was valid or not. So instead of storing the mutable state of a thread, we store the immutable updates to it. Then when we want to view the mutable state, we collect the updates and regenerate it. It's as if, instead of storing the thread, you stored the http requests to update the thread, then replayed them.
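That "store the updates, regenerate the state" idea can be sketched like so (the message shapes are made up, not real ssb schemas):

```javascript
// immutable updates -- these are what actually get stored and replicated
const updates = [
  { type: 'post', id: 'm1', text: 'root post' },
  { type: 'reply', id: 'm2', root: 'm1', text: 'first reply' },
  { type: 'reply', id: 'm3', root: 'm1', text: 'second reply' },
]

// the mutable "thread" is never stored -- it's regenerated on demand
function threadState (updates, rootId) {
  const root = updates.find((u) => u.id === rootId)
  const replies = updates.filter((u) => u.type === 'reply' && u.root === rootId)
  return { root, replies }
}

const thread = threadState(updates, 'm1')
console.log(thread.replies.length) // 2
```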

Scuttlebutt really is an unusual database. Firstly, it's somewhat unusual because it has master-master replication -- most databases don't have that, and some have it tacked on. ssb takes that one step further by making replication the most important feature, and it makes many horrible compromises in order to make that feature work well (such as: not being able to choose your keys, not being able to delete messages, and only being able to append new messages instead of updating things).

This was a reaction to couchdb -- which had a replication feature, but it didn't work really well because it allowed you to update messages and choose your own keys. couchdb replication would work for a federated application, or redundant servers, or a central hub + mirrors, but not a truly decentralized design. However, couch did have some cool things -- like user-definable views/indexes (based on map and reduce functions) and access to the internal log. (It was exposed as part of the replication feature, and I used it several times for various things, but that's another story.) Perhaps most importantly, it was a database created by an open source community, not a corporation. I met a number of nice people who worked on couchdb! (It was the opposite of mongo, which just had a slick website, made by a company -- open source, sure, but not community driven in the same way couchdb was.)

If anyone still has questions let's do a flume db call - AMA about flume! @piet @mix @dinosaur @aljoscha @rabble @cel

Organizing things is hard. The web is supposed to help b/c it has global text links, but servers disappearing is a hurdle. Then there is hyper*, which I still need to learn about.



read about these

merkle tree logs -- use a binary tree instead of a log

partial replication

merkle trees in ssb-viewer

ssb provides ooo (out-of-order) messages, where a server can ask its peers to deliver it a message with a certain hash

you trust that whoever gave you that hash has already verified the validity of that message.

Searching the web for the origins of the hash chain


I realized today that it is impossible to do this site on netlify, because the database for photos is only available on my computer, thanks to ssb.



Woke up with a headache today. Took some ibuprofen and now it's ok.

Found out about ssb-keys-mnemonic today.


Reading about hypercore today.

Think lightweight blockchain crossed with BitTorrent.

Each peer can choose to download only the section of the log they are interested in, without having to download everything from the beginning.

A nice merkle-tree illustration in 'Secured by Merkle trees and cryptography'.

To support mutable datasets, Hypercore uses asymmetric cryptography to sign the root of the Merkle tree as data is appended to it.

to read




What is durable-object compared to KV

I think the durable-object thing gives you some co-location, so you have a strong consistency guarantee, and also it is persisted and location agnostic, whereas KV is eventually consistent. Durable Objects use the same memory in addition to storage, so it is immediately consistent. I think.

Durable objects look like lambda functions, but with a lifespan longer than just the function running

Workers-KV is like a database. eventually consistent



Watching this and noting the stuff: Leveraging 11ty in Healthcare

"Didn't have to do any of that API programming"

Finally read this: https://0fps.net/2020/12/19/peer-to-peer-ordered-search-indexes/

reading cloudflare worker KV

It's kind of weird how I can't find where the cloudflare functions can be placed in my source repo. I can only see the workers in the weird little browser editor from cloudflare.

Use a github action to deploy a worker script

So it looks like you would create a separate repo for the cloudflare workers, and use a GH action to deploy them when they change




Need to get 'eventual-gram' working today. Need to do the routing, make sure the invite-code page is ok


Sent this log as a 'writing sample', then realized i should have sent the fiend guide.


Heard of terminus DB and the podcast today



The cypress tests are nice just because they have the UI component. It is just nice to be able to see the website as it is testing.


I woke up quite late today. Am not sure why

a big list of websites from my phone

What is ssb-feed? It looks like what I'm doing with ssc -- in-memory merkle dag functions.


Found out about https://github.com/mikeal/dagdb from the podcast open hive


ssb-browser -- how does storage work?

ssb-browser storage limit -- 80%?

phone stuff



I added some pictures to the website yesterday. That counts as something. Might want to try combining eleventy with the current hyperstream build command. These build scripts are getting quite laborious.

Have a small headache today and took some advil.

phone stuff


Reading about https://render.com/ today.

They can host backend processes (servers). express

Websockets might work with the services thing



Have been just sitting and looking at twitter and stuff today. Eventually maybe I'll do the next issue -- testing the 'set your username' thing.


phone stuff


Working on eventual-gram -- following people



phone stuff


I don't know what I'm working on right now and that bothers me. I would like to have a single thing that I'm devoting energy to. Instead it's like a blur of different projects. I need to make sense of the decentralized things -- hyper*, 3box, textile, ssb, ceramic, render.com, fission. That could be a thing in its own right -- finding an 'ideal stack' of decentralized things. Yes, that's the next thing after I get an ssb version of eventual-gram ready.

Life has a strange feeling at the moment. It's like I don't know what I'm working on, and it feels like I'm always busy, but I'm never getting anything done.

Add to that I've just regrown my brain 🧠 and I'm always worried that I'm just dumber now and life is more confusing and frustrating for that reason.

I'm looking for a way to deal with life, I suppose, to make some sense of the blur of things. In the past I think I just felt better about it. I don't know.


speakeasy -- WebNative: How to put a full stack directly in the browser





The internet is not working this morning, the day I have a job interview.


Tried looking in the patchwork source for how they do the avatar images for a person who doesn't have an avatar, but couldn't find anything helpful. I guess it's searching npm & google now.








swarm.on('peer', function (stream, id) {
  console.log('CONNECTED', id)
  streams[id] = stream
  onend(stream, function () { delete streams[id] })

  stream
    // (a line-splitting stream, e.g. split2, would normally sit here)
    .pipe(through(function (line, enc, next) {
      var parts = line.toString().split(',')
      var msgid = parts[0]
      var msg = parts.slice(1).join(',')

      // this is where we add an incoming msg to our UI
      if (addMsg(msgid, msg) === false) return next()

      Object.keys(streams).forEach(function (sid) {
        if (sid === id) return
        // this is where we broadcast an incoming msg to the other peers
        streams[sid].write(line + '\n')
      })
      next()
    }))
})






Watched fission video chat today.

project cambia, an Ink & Switch project.





There are enough free things now that you can properly play in the world. Like I just deployed a signalhub to a heroku domain from the button on the signalhub readme. I don't really know how it all works, the signalhub deployment, and that bugs me. What source code is heroku using? But that's not relevant to today's stuff.

I'm starting to enjoy this whole being unemployed thing. I finally have time to learn about all these random things. There are so many random things in the world.

I'm still travelling back in time ~5 years, looking at hyperlog and signalhub and webrtc-swarm. This is an interesting point in time. It feels like there is much less energy now towards experiments with p2p things. But there are some companies now

Maybe this is the end of an era so to speak. No more "centralized" services. It's kind of interesting to consider how to monetize p2p stuff. That's a rabbit hole I don't have time for at this time of night.

Spent the evening using signalhub and webrtc-swarm to make a chatting thing -- https://github.com/nichoth/hub-life . Or you can use the app here -- https://hub-life.netlify.app/ . It's based on this demo from substack. I'm kind of amazed that you can deploy something like this for free. It's an important feeling -- that you are getting something for free. That's why people like bicycles and sailing so much, I think.


development diary


Have spent the last few days fiddling with css & html for my website. I guess it's ok looking. A lot of dealing with css grid… grid was supposed to make everything so much easier… Also have made a GH org for the design company that I guess I am a part of. Erin wanted to start a company. Starting a company with someone… it's like the contemporary version of marriage.

A memory keeps floating through my mind. It's stuck there permanently apparently. My CS teacher in college once said "programming is like juggling. When you first start out you can juggle maybe 3 balls. And the best juggler in the world can juggle maybe like 9 balls. It's not like the best person can juggle 1 million balls."

For those just tuning in to the saga, I crashed a bicycle and almost died a while ago, and after having lost my brain and semi-regrown it, that feels right. It still always feels the same writing code, weirdly. There is always the same level of frustration and challenge. But the difference is that when your brain is missing you are working on much 'easier' things. It doesn't feel any harder, you just do less work.


Electron won this round. Or maybe it was GH actions that won. Anyway, I haven't figured out how to make GH actions build an electron app. I followed the instructions in this repo. Using this demo repo for now. The odd part is that it works fine to build it on my local computer, but when it's built by the GH action and I start the app via terminal, it says it can't resolve some local files. Now I could either start looking at GH actions and go down that rabbit hole until I understand them completely, or say 'ok', keep working on the app, and just manually upload the release binaries to GH.

I think it will be uploading binaries for now. Then if the app ever gets to a point where I feel ok about it, I'll look at the GH action more.

It has been humbling to work with electron, I will say that.

In other battles, I have been thinking about how to deal with client side routes. I thought about writing about it, but it's not really interesting enough to get into. Basically the state/functionality for routes is slightly duplicated -- there is a 'router', but also part of the app state called 'route', which also gets matched against a router.




Today have figured out the psych-city bug -- it was not running eleventy to build the site as part of the build script. I didn't notice it on my local machine because it would run eleventy --serve.

Also have updated the nav and spacing on my website.


Did the tinaCMS introductory tutorial

Need to learn how to do backend

Looked at tinaCMS quite a lot today. I have a feeling that we should avoid it as much as possible, unless the client wants something more wysiwyg than netlify CMS can offer.








See how much you can break down the ssb pattern. Can you pipe through ssh, private-box, and hyper-swarm?

The hyper pattern would be hyperdrive for blobs and byperbee for posts & metadata probably.

What is signal-hub?



webrtc-swarm calls hub.subscribe(uuid) on hub


I think signal-hub is the precursor to a direct p2p connection: a value (the url) known by all potential peers, so that you have a way to meet new peers.
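A toy in-memory version of what I think a hub provides. Purely illustrative (the real signalhub is a server that peers reach over HTTP): peers subscribe to a channel name they all know ahead of time, and broadcasts reach every subscriber, which is enough to exchange WebRTC connection offers.

```javascript
const channels = new Map() // channel name -> Set of subscriber callbacks

function subscribe (channel, onMessage) {
  if (!channels.has(channel)) channels.set(channel, new Set())
  channels.get(channel).add(onMessage)
  // return an unsubscribe function
  return () => channels.get(channel).delete(onMessage)
}

function broadcast (channel, message) {
  for (const fn of channels.get(channel) || []) fn(message)
}

// two peers meet by agreeing on the channel name ahead of time --
// that shared name is the "value known by all potential peers"
const seen = []
subscribe('my-app-swarm', (msg) => seen.push(msg))
broadcast('my-app-swarm', { type: 'offer', from: 'peer-a' })
console.log(seen.length) // 1
```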


What am I doing?