Review of Typlog as a turnkey platform for IndieWeb as a Service

Yesterday I ran across a tweet in the IndieWeb chat announcing that Typlog, a hosted website/blogging platform, now supports Webmention.


I looked at their website, and it also looks like they support a few other IndieWeb building blocks including WebSub and RelMeAuth by leveraging Twitter and GitHub. (The developer indicated they supported IndieAuth, but I highly suspect it’s just RelMeAuth, which is still a solid option for many IndieWeb tools.) 

Having just put together a Quick Start IndieWeb chart that includes services like micro.blog, i.haza.website, and pine.blog, I was immediately intrigued. This new platform (proprietary and not self-hostable, but very similar to the others) looks like a solid little platform for hosting one’s personal website (or podcast) that includes some IndieWeb building blocks.

It’s got a seven-day free trial, so naturally I spun up a quick website. With just a few simple defaults and a pleasant on-boarding experience, I had something pretty solid-looking in only a few minutes.

I’ll note that some functionality, like importing content from WordPress, Tumblr, Ghost, or a podcast feed, requires an actual subscription. Once you’ve subscribed, there are instructions for setting it up to use your own domain name. However, most of the basic functionality is available in the trial. Another important indie feature is a built-in JSON export, so one can take their domain and content to another service provider if they wish.

It looks like it’s got a ton of common, useful features! These include support for podcasting, password-protected posts, scheduled posts, membership posts, and integrations for Stripe, CloudFlare, Google Analytics, and MailChimp among many others. The platform ships with some basic but beautiful page templates and prefers Markdown in the editor, though it seems to work well with raw HTML too.

They also allow adding custom code into the site’s header and footer, so it should be straightforward to add Microsub support to one’s site using a service like Aperture and thereby get (feed) reader support.
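
For anyone curious what that custom code might look like, here is a minimal sketch of the kind of link element one would paste into the header area to advertise a Microsub endpoint; the endpoint URL below is only a placeholder, and the real one would come from your own Aperture account.

```html
<!-- Illustrative only; the actual endpoint URL comes from your Aperture account -->
<link rel="microsub" href="https://aperture.p3k.io/microsub/1234">
```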

Unfortunately it looks like there’s no Micropub support yet. I suspect that Typlog would be quite pleased by the number of posting applications for both desktop and mobile that would become available to it by adding this sort of support.
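
For context on what such support would enable, here is a rough sketch (in Python, using the requests library) of the kind of create request a Micropub client sends. The endpoint URL and token are placeholders for illustration; Typlog doesn’t currently advertise either.

```python
# A hypothetical sketch of a Micropub "create" request per the W3C Micropub spec.
# The endpoint and token are placeholders, not anything Typlog actually exposes.
import requests

MICROPUB_ENDPOINT = "https://example.com/micropub"  # normally discovered via rel="micropub"
ACCESS_TOKEN = "..."  # normally issued via IndieAuth

response = requests.post(
    MICROPUB_ENDPOINT,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    data={"h": "entry", "content": "Posted from a Micropub client!"},
)
# A successful create typically returns 201 with the new post's URL in the Location header
print(response.status_code, response.headers.get("Location"))
```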

Also, on testing, it looks like while the platform supports incoming Webmentions, it doesn’t seem to send outgoing webmentions to the links within posts. (Perhaps they’re batch-processed asynchronously, but I haven’t seen anything yet.)
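
For what it’s worth, outgoing support is reasonably simple to bolt on. Here is a minimal sketch (in Python, assuming the requests and beautifulsoup4 libraries) of how a platform might discover a target’s Webmention endpoint and notify it; it’s purely illustrative and not how Typlog works internally.

```python
# A minimal, illustrative sketch of sending an outgoing Webmention.
# Requires: requests, beautifulsoup4
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def discover_endpoint(target_url):
    resp = requests.get(target_url, timeout=10)
    # Check the HTTP Link header first
    if "webmention" in resp.links:
        return urljoin(target_url, resp.links["webmention"]["url"])
    # Fall back to <link> or <a> elements with rel="webmention"
    el = BeautifulSoup(resp.text, "html.parser").find(["link", "a"], rel="webmention")
    return urljoin(target_url, el["href"]) if el and el.has_attr("href") else None

def send_webmention(source_url, target_url):
    endpoint = discover_endpoint(target_url)
    if endpoint:
        # Notify the target that source_url links to target_url
        return requests.post(endpoint, data={"source": source_url, "target": target_url})
    return None
```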

The platform seems to do really well for posting articles and podcasts and even has a custom template for reviews, but all of the user interface I’ve seen requires one to add a title to every post, so it doesn’t lend itself to adding notes (status updates) or other indie-like posts like bookmarks, likes, or simple replies. It has a minimal built-in h-card, but it could be expanded a bit for sending webmentions.
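
For comparison, an expanded h-card doesn’t take much markup. A minimal sketch (with placeholder names, URLs, and image paths) that adds a photo, URL, and short bio might look like this:

```html
<!-- Illustrative only; names, URLs, and image paths are placeholders -->
<div class="h-card">
  <img class="u-photo" src="/images/avatar.jpg" alt="Photo of Your Name">
  <a class="p-name u-url" href="https://example.com/">Your Name</a>
  <p class="p-note">A short bio, so sites receiving webmentions can show richer author details.</p>
</div>
```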

The pricing for the service starts at a very reasonable $4/month and goes up to $12/month with some additional discounts for annual payments.

In sum, I love this as another very indie-flavored web hosting service and platform for those looking to make a quick and easy move into a more IndieWeb way of hosting their website and content. While services like micro.blog and i.haza.website may be ahead of it on some technical fronts, Typlog, like pine.blog, has a variety of different and unique features that many are likely to really appreciate or wish that other services had. I imagine that over time all of them will reach relative technical parity but will differentiate themselves on user interface, flexibility, and other services. I could definitely recommend it to friends and family who don’t want to be responsible for building and managing their own websites.

One of my favorite parts of Typlog is that the company building it is based in Japan, where I’ve seen a little bit of development work for IndieWeb, but not as much as in portions of Europe, America, or Australia. It’s been great seeing some growth and spread of IndieWeb philosophy and platforms in Asia, Africa, and India recently.

And of course, who couldn’t love the fact that the developer is obviously eating their own cooking by using the platform to publish their own website! I can’t wait to see where Typlog goes next.



Domains 2019 Reflections from Afar

My OPML Domains Project

Not being able to attend Domains 2019 in person, I was bound and determined to attend as much of it as I could manage remotely. A lot of this revolved around following the hashtag for the conference, watching the Virtually Connecting sessions, interacting online, and starting to watch the archived videos after-the-fact. Even with all of this, for a while I had been meaning to flesh out my ability to follow the domains (aka websites) of other attendees and people in the space. Currently the easiest way (for me) to do this is via RSS with a feed reader, so I began collecting feeds of those from the Twitter list of Domains ’17 and Domains ’19 attendees as well as others in the education-related space who tweet about A Domain of One’s Own or IndieWeb. In some sense, I would be doing some additional aggregation work on expanding my blogroll, or, as I call it now, my following page since it’s much too large and diverse to fit into a sidebar on my website.

For some brief background, my following page is built on some old functionality in WordPress core that has since been hidden. I’m using the old Links Manager for collecting links and feeds of people, projects, groups, and institutions. This link manager creates standard OPML files, which WordPress can break up by category and which can easily be imported into most standard feed readers. Even better, some feed readers, like Inoreader, support OPML subscriptions, so one could subscribe to my OPML file, and any time I update it in the future with new subscriptions, your feed reader would automatically update to follow those as well. I use this functionality in my own Inoreader account, so that any new subscriptions I add to my own site are simply synced to my feed reader without needing to be separately added or updated.
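
For those who haven’t looked inside one before, an OPML file of this sort is just a small XML document of nested outlines. A minimal sketch of the kind of structure such an export produces (the category name, person, and URLs are placeholders) looks roughly like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative structure only; names and URLs are placeholders -->
<opml version="2.0">
  <head>
    <title>Following</title>
  </head>
  <body>
    <outline text="Domain of One's Own">
      <outline text="Example Person" type="rss"
               xmlUrl="https://example.com/feed/"
               htmlUrl="https://example.com/" />
    </outline>
  </body>
</opml>
```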

The best part of creating such a list and publishing it in a standard format is that you, dear reader, don’t need to spend the several hours I did to find, curate, and compile the list to recreate it for yourself, but you can now download it, modify it if necessary, and have a copy for yourself in just a few minutes. (Toward that end, I’m also happy to update it or make additions if others think it’s missing anyone interesting in the space–feedback, questions, and comments are heartily encouraged.) You can see a human-readable version of the list at this link, or find the computer parse-able/feed reader subscribe-able link here.

To make it explicit, I’ll also note that these lists also help me to keep up with people and changes in the timeframe between conferences.

Anecdotal Domains observations

In executing this OPML project I noticed some interesting things about the Domains community at large (or at least those who are avid enough to travel and attend in person or actively engage online). I’ll lay these out below. Perhaps at a future date, I’ll do a more explicit capture of the data with some analysis.

The vast majority of sites I came across were, unsurprisingly, WordPress-based, which made it much easier to find RSS feeds to read/consume material. I could simply take a domain name and add /feed/ to the end of the URL, and voilà, a relatively quick follow!

There were a lot of people whose sites didn’t have obvious links to their feeds. To me this is a desperate tragedy for the open web. We’re already behind the eight ball compared to social media and corporate-controlled sites, so why make it harder for people to read/consume our content from our own domains? And as if to add insult to injury, the places on one’s website where an RSS feed link/icon would typically live were instead populated by links to corporate social media like Facebook, Twitter, and Instagram. In a few cases I also saw legacy links to Google+, which ended service on April 2, 2019 and disappeared from the web along with a tremendous number of online identities and personal data. (Here’s a reminder to remove those if you’ve forgotten.) For those who are also facing this problem, there’s a fantastic service called SubToMe that offers a universal follow button which can be installed on one’s site or used as a browser bookmarklet and which works with a wide variety of feed readers.

I was thrilled to see a few people using interesting alternate content management systems/site generators like WithKnown and Grav. There were also several people who had branched out to static site generators (sites without a database). This sort of plurality is a great thing for the community, and the competition it creates in the space for sites, design, user experience, etc. is awesome. It’s thrilling to see people in the Domains space taking advantage of alternate options, experimenting with them, and using them in the wild.


I’ll note that I did see a few poor souls who were using Wix. I know there was at least one warning about Wix at the conference, but in case it wasn’t stated explicitly, Wix does not support exporting data, which makes any potential future migration of sites difficult. Definitely don’t use it for any extended writing, as cutting and pasting more than a few simple static pages becomes onerous. To make matters worse, Wix doesn’t offer any sort of backup service, so if they chose to shut your site off for any reason, you’d be completely out of luck. No backup + no export = I can’t recommend using it.

If your account or any of your services are cancelled, it may result in loss of content and data. You are responsible to back up your data and materials. —Wix Terms of Use

I also noticed a few people had generic domain names that they didn’t really own (and not even in the sense of rental ownership). Here I’m talking about domain names of the form username.domainsproject.com. While I’m glad that they have a domain that they can use and generally control, it’s not one that they can truly exert full ownership over. (They just can’t pick it up and take it with them.) Even if they could export/import their data to another service or even a different content management system, all their old links would immediately disappear from the web. In the case of students, while it’s nice that their school may provide this space, it is more problematic for data portability and longevity on the web that they’ll eventually lose that institutional domain name when they graduate. On the other hand, if you have something like yourname.com as your digital home, you can export/import, change content management services, hosting companies, etc. and all your content will still resolve and you’ll be eminently more findable by your friends and colleagues. This choice is essentially the internet equivalent of changing cellular providers from Sprint to AT&T but taking your phone number with you–you may change providers, but people will still know where to find you without being any the wiser about your service provider changes. I think that for allowing students and faculty the ability to more easily move their content and their sites, Domains projects should require individual custom domains.

If you don’t own/control your domain name, you’re prone to lose a lot of value built up in your permalinks. I’m also reminded here of the situation encountered by faculty who move from one university to another. (Congratulations, by the way, to Martha Burtis on the pending move to Plymouth State. You’ll notice she won’t face this problem.) There’s also the situation of Matthew Green, a security researcher at Johns Hopkins, whose institutional website was taken down by his university when the National Security Agency flagged an apparent issue. Fortunately, in his case he had his own separate domain name and content on an external server, and his institutional account was just a mirrored copy of his own domain.

If you’ve got it, flaunt it.
—Mel Brooks from The Producers (1968), obviously with “it” here referring to A Domain of One’s Own.

Also during my project, I noted that quite a lot of people don’t list their own personal/professional domains within their Twitter or other social media profiles. This seems a glaring omission, particularly for at least one person whose Twitter bio creatively and proactively claims that they’re an avid proponent of A Domain of One’s Own.

And finally, there were a small–but still reasonable–number of people within the community for whom I couldn’t find a domain at all! A small number assuredly are new to the space or exploring it, and so I’d give them a pass, but I was honestly shocked that some just didn’t have one.

(Caveat: I’ll freely admit that the value of Domains is that one has ultimate control, including the right not to have or use one, or even to have a private, hidden, and completely locked down one, just the way that Dalton chose not to walk in the conformity scene in Dead Poets Society. But even with this in mind, how can we ethically recommend this pathway to students, friends, and colleagues if we’re not willing to participate ourselves?)

Too much Twitter & a challenge for the next Domains Conference

One of the things that shocked me most at a working conference about the idea of A Domain of One’s Own within education, where more than significant time was given to the ideas of privacy, tracking, and surveillance, was the extent to which nearly everyone present gave up their identity, authority, and digital autonomy to Twitter, a company which actively represents almost every version of the poor ethics, surveillance, tracking, and design choices we all abhor within the edtech space.

Why weren’t people proactively using their own domains to communicate instead? Why weren’t their notes, observations, highlights, bookmarks, likes, reposts, etc. posted to their own websites? Isn’t that part of what we’re in all this for?!

One of the shining examples from Domains 2019 that I caught as it was occurring was John Stewart’s site, where he was aggregating talk titles, abstracts, notes, and other details relevant to himself and his practice. He then published them in the open and syndicated copies to Twitter, where the rest of the conversation seemed to be happening. His living notebook, or digital commonplace book if you will, is of immense value not only to him, but to all who are able to access it. But you may ask, “Chris, didn’t you notice them on Twitter first?” In fact, I did not! I caught them because I was following the live feeds of some of the researchers, educators, and technologists I follow in my feed reader using the OPML files mentioned above. I would submit, especially as a remote participant/follower of the conversation, that his individual posts were worth 50 or more individual tweets. Just the additional context they contained made them proverbially worth their weight in gold.

Perhaps for the next conference we might build a planet or site that could aggregate all the feeds of people’s domains, using their categories/tags or other means, to create our own version of the Twitter stream? Alternatively, by that time I suspect that work on some of the new IndieWeb readers will have solidified enough to allow people to read feeds and interact with that content directly and immediately in much the way Twitter works now, except that all the interaction will occur on our own domains.
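
To give a sense of how little is involved, here is a minimal sketch (in Python, assuming the feedparser library) of a planet-style aggregator. The feed URLs are placeholders for illustration; in practice they would be read from a shared OPML file like the one described above.

```python
# A minimal sketch of a planet-style aggregator; feed URLs are placeholders
# and would normally come from a shared OPML file of attendees' domains.
import time
import feedparser

FEEDS = [
    "https://example.com/feed/",
    "https://another.example.org/feed/",
]

entries = []
for url in FEEDS:
    parsed = feedparser.parse(url)
    entries.extend(parsed.entries)

# Sort newest-first, falling back to the epoch when an entry has no date
entries.sort(key=lambda e: e.get("published_parsed") or time.gmtime(0), reverse=True)

# Print a simple digest of the twenty most recent posts across all domains
for entry in entries[:20]:
    print(entry.get("published", "n.d."), "|", entry.get("title", "(untitled)"), "|", entry.get("link"))
```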


As educators, one of the most valuable things we can and should do is model appropriate behavior for students. I think it’s high time that when attending a professional conference about A Domain of One’s Own that we all ought to be actively doing it using our own domains. Maybe we could even quit putting our Twitter handles on our slides, and just put our domain names on them instead?

Of course, I wouldn’t and couldn’t suggest or even ask others to do this if I weren’t willing and able to do it myself.  So as a trial and proof of concept, I’ve actively posted all my interactions related to Domains 2019 that I was interested in to my own website using the tag Domains 2019.  At that URL, you’ll find all the things I liked and bookmarked, as well as the bits of conversation on Twitter and others’ sites that I’ve commented on or replied to. All of it originated on my own domain, and, when it appeared on Twitter, it was syndicated only secondarily so that others would see it since that was where the conversation was generally being aggregated. You can almost go back and recreate my entire Domains 2019 experience in real time by following my posts, notes, and details on my personal website.

So, next time around can we make an attempt to dump Twitter!? The technology for pulling it off certainly already exists, and is reasonably well-supported by WordPress, WithKnown, Grav, and even some of the static site generators I noticed in my brief survey above. (Wix obviously doesn’t even come close…)

I’m more than happy to help people build and flesh out the infrastructure necessary to try to make the jump. Even if just a few of us began doing it, we could serve as that all-important model for others as well as for our students and other constituencies. With a bit of help and effort before the next Domains Conference, I’ll bet we could collectively pull it off. I think many of us are either well- or even over-versed in the toxicities and surveillance underpinnings of social media, learning management systems, and other digital products in the edtech space, but now we ought to attempt a move away from it with an infrastructure that is our own–our Domains.


This has to be the most awesome Indieweb pull request I’ve seen this year.

WithKnown is a fantastic, free, and open-source content management system that supports some of the most bleeding-edge technology on the internet. I’ve been playing with it for over two years and love it!

And today, there’s another reason to love it even more…

This is also a great reminder that developers can have a lasting and useful impact on the world around them–even in the political arena.


Notes from Day 2 of Dodging the Memory Hole: Saving Online News | Friday, October 14, 2016


If you missed the notes from Day 1, see this post.

It may take me a week or so to finish putting some general thoughts and additional resources together based on the two-day conference so that I might give a more thorough accounting of my opinions as well as next steps. Until then, I hope that the details and mini-archive of content below may help others who attended, or provide a resource for those who couldn’t make the conference.

Overall, it was an incredibly well-programmed and well-run conference, so kudos to all those involved who kept things moving along. I’m now certainly much more aware of the gaping memory hole the internet is facing despite the heroic efforts of a small handful of people and institutions attempting to improve the situation. I’ll try to go into more detail later about a handful of specific topics and next steps, as well as a listing of resources I came across which may prove to be useful tools for both the archiving/preservation and IndieWeb communities.

Archive of materials for Day 2

Audio Files

Below are the recorded audio files embedded in .m4a format (using a Livescribe Pulse Pen) for several sessions held throughout the day. To my knowledge, none of the breakout sessions were recorded except for the one which appears below.

Summarizing archival collections using storytelling techniques



Presentation: Summarizing archival collections using storytelling techniques by Michael Nelson, Ph.D., Old Dominion University

Saving the first draft of history


Special guest speaker: Saving the first draft of history: The unlikely rescue of the AP’s Vietnam War files by Peter Arnett, winner of the Pulitzer Prize for journalism
Peter Arnett talking about news reporting in Vietnam in the ’60s.

Kiss your app goodbye: the fragility of data journalism


Panel: Kiss your app goodbye: the fragility of data journalism
Featuring Meredith Broussard, New York University; Regina Lee Roberts, Stanford University; Ben Welsh, The Los Angeles Times; moderator Martin Klein, Ph.D., Los Alamos National Laboratory

The future of the past: modernizing The New York Times archive


Panel: The future of the past: modernizing The New York Times archive
Featuring The New York Times Technology Team: Evan Sandhaus, Jane Cotler and Sophia Van Valkenburg; moderated by Edward McCain, RJI and MU Libraries

Lightning Rounds: Six Presenters



Lightning rounds (in two parts)
Six + one presenters: Jefferson Bailey, Terry Britt, Katherine Boss (and team), Cynthia Joyce, Mark Graham, Jennifer Younger and Kalev Leetaru
1. Jefferson Bailey, Internet Archive, “Supporting Data-Driven Research using News-Related Web Archives”
2. Terry Britt, University of Missouri, “News archives as cornerstones of collective memory”
3. Katherine Boss, Meredith Broussard and Eva Revear, New York University, “Challenges facing preservation of born-digital news applications”
4. Cynthia Joyce, University of Mississippi, “Keyword ‘Katrina’: Re-collecting the unsearchable past”
5. Mark Graham, Internet Archive/The Wayback Machine, “Archiving news at the Internet Archive”
6. Jennifer Younger, Catholic Research Resources Alliance, “Digital Preservation, Aggregated, Collaborative, Catholic”
7. Kalev Leetaru, senior fellow, The George Washington University and founder of the GDELT Project, “A Look Inside The World’s Largest Initiative To Understand And Archive The World’s News”

Technology and Community


Presentation: Technology and community: Why we need partners, collaborators, and friends by Kate Zwaard, Library of Congress

Breakout: Working with CMS


Working with CMS, led by Eric Weig, University of Kentucky

Alignment and reciprocity


Alignment & reciprocity by Katherine Skinner, Ph.D., executive director, the Educopia Institute

Closing remarks


Closing remarks by Edward McCain, RJI and MU Libraries and Todd Grappone, associate university librarian, UCLA

Live Tweet Archive

Reminder: In many cases my tweets don’t reflect direct quotes of the attributed speaker, but are often slightly modified for clarity and length for posting to Twitter. I have made a reasonable attempt in all cases to capture the overall sentiment of individual statements while using as many of the participant’s original words as possible. Typically, for speed, there wasn’t much editing of these notes. Below I’ve changed the attribution of one or two tweets to reflect the proper person(s). For convenience, I’ve also added a few hyperlinks to useful resources after the fact that I didn’t have time to include in the original tweets. I’ve attached .m4a audio files of most of the audio for the day (apologies for the shaky quality, as it’s unedited), which can be used for more direct attribution if desired. The Reynolds Journalism Institute videotaped the entire day and livestreamed it. Presumably they will release the video on their website for a more immersive experience.

Peter Arnett:

Condoms were required issue in Vietnam–we used them to waterproof film containers in the field.

Do not stay close to the head of a column, medics, or radiomen. #warreportingadvice

I told the AP I would undertake the task of destroying all the reporters’ files from the war.

Instead the AP files moved around with me.

Eventually the 10 trunks of material went back to the AP when they hired a brilliant archivist.

“The negatives can outweigh the positives when you’re in trouble.”

Edward McCain:

Our first panel: Kiss your app goodbye: the fragility of data journalism

Meredith Broussard:

I teach data journalism at NYU

A news app is not what you’d install on your phone

Dollars for Docs is a good example of a news app

A news app is something that allows the user to put themself into the story.

Often there are three CMSs: web, print, and video.

News apps don’t live in any of the CMSs. They’re bespoke and live on a separate data server.

This has implications for crawlers which can’t handle them well.

Then how do we save news apps? We’re looking at examples and then generalizing.

Everyblock.com was a good example based on chicagocrime and later bought by NBC and shut down.

What?! The internet isn’t forever? Databases need to be saved differently than web pages.

Reprozip was developed by NYU Center for Data and we’re using it to save the code, data, and environment.

Ben Welsh:

My slides will be at http://bit.ly/frameworkfix. I work on the data desk @LATimes

We make apps that serve our audience.

We also make internal tools that empower the newsroom.

We also use our nerdy skills to do cool things.

Most of us aren’t good programmers, we “cheat” by using frameworks.

Frameworks do a lot of basic things for you, so you don’t have to know how to do it yourself.

Archiving tools often aren’t built into these frameworks.

Instagram, Pinterest, Mozilla, and the LA Times use django as our framework.

Memento for WordPress is a great way to archive pages.

We must do more. We need archiving baked into the systems from the start.

Slides at http://bit.ly/frameworkfix

Regina Roberts:

Got data? I’m a librarian at Stanford University.

I’ll mention Christine Borgman’s book Big Data, Little Data, No Data.

Journalists are great data liberators: FOIA requests, cleaning data, visualizing, getting stories out of data.

But what happens to the data once the story is published?

BLDR: Big Local Digital Repository, an open repository for sharing open data.

Solutions that exist: Hydra at http://projecthydra.org or Open ICPSR www.openicpsr.org

For metadata: www.ddialliance.org, RDF, International Image Interoperability Framework (iiif) and MODS

Martin Klein:

We’ll open up for questions.

Audience Question:

What’s more important: obey copyright laws or preserving the content?

Regina Roberts:

The new creative commons licenses are very helpful, but we have to be attentive to many issues.

Perhaps archiving it and embargoing for later?

Ben Welsh:

Saving the published work is more important to me, and the rest of the byproduct is gravy.

Evan Sandhaus:

I work for the New York Times, you may have heard of it…

Doing a quick demo of Times Machine from @NYTimes

Sophia van Valkenburg:

Talking about modernizing the born-digital legacy content.

Our problem was how to make an article from 2004 look like it had been published today.

There were hundreds of thousands of articles missing.

There was no one definitive list of missing articles.

Outlining the workflow for reconciling the archive XML and the definitive list of URLs for conversion.

It’s important to use more than one source for building an archive.

Jane Cotler:

I’m going to talk about all of “the little things” that came up along the way.

Article Matching: Fusion – how to match print XML with the web HTML that was scraped.

Primarily, we looked at common phrases between the corpus of the two different data sets.

We prioritized the print data over the digital data.

We maintain a system called switchboard that redirects from old URLs to the new ones to prevent link rot.

The case of the missing sections: some sections of the content were blank and not transcribed.

We made the decision of taking out data we had in lieu of making a better user experience for missing sections.

In the future, we’d also like to put photos back into the articles.

Evan Sandhaus:

Modernizing and archiving the @NYTimes archives is an ongoing challenge.

Edward McCain:

Can you discuss the decision to go with a more modern interface rather than a traditional archive of how it looked?

Evan Sandhaus:

Some of the decision was to get the data into an accessible format for modern users.

We do need to continue work on preserving the original experience.

Edward McCain:

Is there a way to distinguish between the print version and the online versions in the archive?

Audience Question:

Could a researcher do work on the entire corpora? Is it available for subscription?

Edward McCain:

We do have a sub-section of data available, but don’t have it prior to 1960.

Audience Question:

Have you documented the process you’ve used on this preservation project?

Sophia van Valkenburg:

We did save all of the code for the project within GitHub.

Jane Cotler:

We do have meeting notes which provide some documentation, though they’re not thorough.

Chris Aldrich:

Oh dear. Of roughly 1,155 tweets I counted about #DtMH2016 in the last week, roughly 25% came from me. #noisy

Open-source tool I had mentioned to several: @wallabagapp, a self-hostable application for saving web pages: https://www.wallabag.org
