After one realises that one's childhood wasn't as trouble-free and happy as one may have assumed, it is this realisation that provides the mirror in which one can finally reflect on one's life so far. Especially the troubles one had and possibly still has, such as the trouble of blending in with other adults.
Thinking about it, one wonders just what it is that makes one so different. I mean, sure, one's past has been rather traumatic, and nobody expects someone afflicted with PTSD to lead a perfectly normal life. Yet this is more than just the trauma that a war veteran or a victim of violent crime might carry. They can at least remember a life in which things were more or less normal before the traumatic event.
Part of the reflection process when coming to terms with childhood abuse is the acknowledgement that the monologue one kept repeating whenever asked to think about one's childhood, along with some choice memories picked to fit a carefree-childhood alibi, was all just part of protecting oneself from the truth. That in reality, nothing about one's childhood was happy or carefree. At least not until the thing happened that apparently shattered one's mind at a young age.
When my mother described to me the drastic change that I underwent quite suddenly around the age of five, transforming from a happy, carefree child into a withdrawn one who rejected any form of physical contact, I could finally look back on the years between then and now and see that little has changed. I made this coping mechanism part of my new 'self', and I still am that traumatised child.
It isn't just physical touch that I find repulsive or terrifying to this day, though it is the most obvious sign. Whatever it was that adults did to me at that young age, it appears to have instilled such a strong and fundamental sense of repulsion and fear of anything to do with 'adults' that trying to grasp the full scope of it is impossible.
I think that adults as a whole have made a pretty miserable society, in which nobody can agree on anything, where help is often nowhere to be found and the wealthy freely exploit the less wealthy. I can see that an individual's life has little value in society and that for all but the wealthy it is merely an exercise in self-exploitation at the behest of others until one's last breath. The lucky ones will not have to deal with being exploited as well.
I cannot forgive the adults who made me feel this way. Who took away most of my childhood and ruined my life in so many ways. I just wish that I could remember more than these half-remembered glimpses and sensations of intense terror and panic. Who it was, and why.
I don't feel like I am a complete human being at this point. Having been emotionally and psychologically withdrawn for so many years obviously didn't help. It's only recently that I am beginning to regain a sense of self, and discovering that truly a lot of time has passed since I was that five-year-old kid.
Yet it is with absolute terror that I find that my view of society and this world isn't changing along with it.
Everything about society is terrifying, unforgiving, cold, harsh, unhealthy, deceiving and delusional. The only escapes that I can see are those where one can flee into the realm of logic and reason, like that of science and technology, or into innocent fun like that of cutesy video games. I feel that intellectually there is a lot in this world that I can and would love to learn and understand. I can see that there's a lot of beauty and a true sense of wonder, yet this too lies beyond the realm of human society. Human society only faces inwards and only concerns itself with humans and laws and regulations and conflicts between humans. It stumbles around blindly.
For the past decades, the realm of science and technology has been where I have been hiding, mentally. Here there are none of the requirements of human society. Only the willingness and capacity to be curious and learn.
What terrifies me about becoming an 'adult'? Part of it is simply the terror of becoming like all of the adults who have harmed and hurt me over the years. The mere thought of accepting any part of what they are and stand for is truly repulsive. It feels as though I would somehow approve of their actions towards me, by becoming more like them.
That is the core of it all, I guess. Inside of me, I can still intensely feel the pain and terror of that child. The way I react to situations, and the mindsets I slip into when my post-traumatic stress disorder gets triggered, feel like regressing into that terrified child. Far too often, adults today still manage to hit exactly those trigger points, where their actions, words and so on can only be interpreted by my mind as threatening. Threatening in the way that shattered my mind once, years ago.
It makes one wonder whether there truly is a way to deal with childhood trauma, or even to give it a place.
Maya
Friday, 27 March 2020
The fickle world of software development
At the beginning of this month, I published a blog post titled 'NymphCast: A casual attempt at an open alternative to ChromeCast and kin' [1]. This got picked up by a number of tech websites [2][3][4][5][6] over the past weeks. Explaining to people what the NymphCast project [7] is about and what it can do right now, and perhaps most importantly getting feedback on how people perceive the project based on the information that is currently out there, has been quite enlightening.
As I wrote that original blog post, I was still in the process of finishing up the implementation of the basic features of the initial (v0.1) release. Meaning that it was a definite prototype, with the scaffolding obscuring much of what was already there, but also what wasn't there yet. Things like seeking through files being streamed, for example. Essentially, the way the project got presented in those articles made it sound as if it was a ready-to-use solution folk could just slap onto their Raspberry Pi compute things and be off to the races.
Suffice it to say that this caused a bit of confusion in the commentary to those articles.
I have since put up a product page with documentation [8] over at the Nyanko website, with information on how to develop these mysterious 'NymphCast apps' using the AngelScript programming language. That partially addresses the question of where in blazes the documentation for any of it is. The other points raised were about its feature set and what it will support in the future, along with a host of related questions.
The tricky thing there is that, as I formulated it in my blog post, the project is first and foremost a fun hobby project of mine, meant to scratch the itch of there not being a proper open-source, cross-platform alternative to ChromeCast or AirPlay. Of course I am open to extending the feature set with things that do not necessarily scratch my own itches. A lot of the features that got suggested are interesting from a development and technical point of view, so I see them as interesting challenges to learn from and grow as a developer.
Yet things are never quite that rosy.
Even though I implemented the last and most invasive feature in NymphCast a few weeks ago (full seeking support), this only meant that the rule of thumb I use for estimating required development time kicked in. This is the '10/90/90' rule: the workload in any (software) project is 10% planning, 90% implementing (the part that usually gets scheduled as the full 100%) and 90% testing/debugging, making a total of 190% for those keeping score. A rough estimate is that I spent about a month of full-time development on NymphCast. This means that with testing time having kicked in, I'll have a few (full-time) weeks of testing ahead of me, including the Alpha, Beta and Release Candidate phases.
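To make the arithmetic concrete, here is a toy calculation using the numbers above (about one month of full-time implementation; these are rough estimates, not a tracked schedule):

```cpp
#include <iostream>

// Toy illustration of the 10/90/90 rule, plugging in my rough NymphCast
// estimate of roughly one month (30 days) of full-time implementation work.
int main() {
    const double implementing = 30.0;                   // the '90' that gets scheduled as 100%.
    const double planning = implementing * 10.0 / 90.0; // the leading '10'.
    const double testing = implementing;                // the trailing '90'.

    std::cout << "Planning:          " << planning << " days\n";
    std::cout << "Implementing:      " << implementing << " days\n";
    std::cout << "Testing/debugging: " << testing << " days\n";
    std::cout << "Total:             " << planning + implementing + testing << " days\n";
    // Total: ~63 days, or nearly twice the month that got 'scheduled'.
    return 0;
}
```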
Here of course one can cheat, as is common in (commercial) software development: instead of accepting the 10/90/90 rule, the work gets crammed into something like '10/90/10', with just enough testing done that nothing is all too obviously broken for the first users. This often leads to the other 80% of testing being folded into subsequent development cycles. If I were to follow this approach, the v0.2 development version of NymphCast would involve a lot of catching up on bugs and issues that lived on due to the shortened v0.1 test cycle.
It's quite possible that these latent bugs and issues from v0.1 would interfere with v0.2 development, even slowing it down, as it would often be unclear whether a new feature or change created a regression, or whether one merely tripped over a hidden bug. This is why I'm no big fan of shortened test cycles, even though avoiding them does mean disappointing end-users (or heaven forbid, customers) who have to wait. Instead, I prefer to just grind my way through testing, torturing the software with countless (intended and unintended) usage scenarios to shake out bugs. Let me just say that I fixed a whole stack of issues between the first (alpha1) and second (alpha2) alpha releases.
Here of course the consideration is that end-users ultimately want to have something that Just Works (tm), with only a certain tolerance for bugs and issues. Is having them wait longer worse than giving them a broken product?
All of this is more or less a long-winded way to say that the rush of popularity (nearly 50,000 views on that one blog post, and over 1,100 stars on the GitHub project) was unexpected and perhaps somewhat premature. Possibly.
Though I very much appreciate the attention, it does put me into a bit of a pickle. On the one hand there's now this sudden popularity, which will likely die down if the project doesn't deliver; on the other hand, going into full-time development mode to get the v0.1 release and beyond out of the door isn't feasible either. It is, after all, a hobby project. Hobby projects do not pay the bills or put food on the table.
It seems almost as if the only real option is to let the project slip back into obscurity, while I slowly work away on it over the next months and perhaps years, until it's ready(-ish) for prime time.
Time will tell, I guess.
Maya
[1] https://mayaposch.blogspot.com/2020/03/nymphcast-casual-attempt-at-open.html
[2] https://tweakers.net/geek/164126/ontwikkelaar-maakt-open-source-chromecast-alternatief.html
[3] https://www.reddit.com/r/linux/comments/fhdnav/nymphcast_an_opensource_alternative_to_chromecast/
[4] https://www.tomshardware.com/news/using-raspberry-pi-like-a-chromecast-open-source-nymphcast-project-makes-it-happen
[5] https://www.kitguru.net/lifestyle/mobile/accessories/christopher-nohall/nymphcast-turns-your-raspberry-pi-into-a-streaming-device/
[6] https://news.ycombinator.com/item?id=22457351
[7] https://github.com/MayaPosch/NymphCast
[8] http://nyanko.ws/nymphcast.php
Tuesday, 3 March 2020
DebounceHAT: how to keep switches from killing a Raspberry Pi board
Back in 2018, I found myself volunteering to rework part of a local hackerspace's RFID locking and club status system. Until that point the system had consisted of a converted ATX power supply and a Raspberry Pi single-board computer (SBC) literally dangling from wires, with signal wires soldered onto the general-purpose input/output (GPIO) pins of the SBC. These signal wires connected to two switches: one detecting whether the lock in the door had been engaged, the other a manual switch with which the non-permanently powered outlets in the space could be turned on or off.
Replacing the active-loop Python script that constantly polled the two inputs with something more elegant was fairly straightforward [1]: a C++ solution using interrupt service routines (ISRs). Using ISRs instead of constant polling did lose the 'advantage' of debouncing the incoming switch signals through brute force, so debouncing had to be re-added in a more elegant fashion.
Here one has two options. The first is a timer-based approach in software: one registers that 'something' is happening on an input connected to a mechanical switch, but waits to read out the value (low or high) on that input until a certain amount of time has passed and the fluctuating signal from the switch's contacts bouncing against each other has likely settled into its final state.
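A minimal sketch of this timer-based approach, assuming a generic GPIO library that can attach an interrupt handler to a pin and read its level (readPin and the ISR hookup are hypothetical stand-ins, not the actual ClubStatusService code):

```cpp
#include <chrono>

// Timer-based software debounce: the ISR only records that an edge occurred;
// the settled level is read out later, once the input has been quiet for the
// debounce interval.
using Clock = std::chrono::steady_clock;

constexpr std::chrono::milliseconds kDebounceDelay(50);

Clock::time_point lastEdge;
bool stableLevel = false;

// Attach this to the pin-change interrupt: note that 'something' happened,
// but do not trust the pin level yet.
void onEdge() {
    lastEdge = Clock::now();
}

// Call this after a timer expires: only once the signal has been quiet for
// kDebounceDelay do we read out the now-settled level.
bool currentLevel(bool (*readPin)()) {
    if (Clock::now() - lastEdge >= kDebounceDelay) {
        stableLevel = readPin();
    }
    return stableLevel;
}
```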
The other approach is to do it in hardware, where the fluctuating signal from the mechanical switch passes through a hardware circuit, which essentially smooths it out so that on the software end one can always read the pin out as if it's a purely digital input. I previously covered the basic theory behind this type of circuit on my development blog [2].
This all led to a basic expansion board that could be placed on top of the Raspberry Pi's GPIO pins, first in a basic (prototype) version, as seen on the left-hand side of the image below, which later got reworked into the version seen on the right. As one may note, the latter board has a lot more components on it, which leads to the next part of this story.
You see, with the basic debounce circuit, consisting of an RC (resistor-capacitor) filter and an inverting Schmitt trigger chip, the debouncing part worked great. The noisy signal from the switch's contacts bouncing against each other got practically eliminated by the RC filter, with any remaining noise that survived the filter being dealt with by the Schmitt trigger, courtesy of the large dead zone between its trigger points.
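For those curious about the timing: the delay such an RC filter adds before the Schmitt trigger flips can be estimated from the capacitor charging curve. With the capacitor charging through R towards the supply voltage:

```latex
V(t) = V_{DD}\left(1 - e^{-t/RC}\right)
\quad\Rightarrow\quad
t = RC \ln\frac{V_{DD}}{V_{DD} - V_{T+}}
```

With purely illustrative values (not necessarily those on the board) of R = 10 kΩ, C = 1 µF, V_DD = 3.3 V and an upper trigger threshold V_T+ of about 1.8 V, this gives t ≈ 10 ms × ln(2.2) ≈ 8 ms, comfortably longer than the few milliseconds of typical contact bounce.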
Unfortunately, during the first winter it was found that touching the handle of the door that had the first switch connected to it would disable the Raspberry Pi until it was restarted. A bit of research showed this to be due to the electrostatic discharge (ESD) from the person touching the (metal) door handle. This discharge would find its way from the door handle to the metal parts inside the frame, then to the metal parts of the micro switch embedded in the frame. From there it would travel along the signal wire to the Raspberry Pi and zap the system.
The same effect could be observed when there was a surge in the 230V wiring running alongside this signal wire, inducing a current via electromagnetic coupling. Fortunately this current was low enough that it only caused false positive trigger events, but it's conceivable that with the right EM source, voltages and currents strong enough to damage the connected hardware could be induced. As the Raspberry Pi and similar boards only accept 3.3V on their pins and can sink only a few milliamperes, such events would likely cause permanent damage in the long term.
Obviously a solution was needed to fix this.
This is where the second board was conceived: in order to completely isolate the SBC from its environment, and especially from the signal wires, it was decided to use opto-isolators along with an isolated DC-DC supply to provide a voltage source for the switches. This way, the signal wires were left completely electrically isolated, with any incoming signals transferred via the opto-isolator's LED and photodiode instead of via a copper wire. In addition, spark gaps and provision for an earth wire were added to the board, so that any surge would be safely carried away.
While a good start, a friend convinced me to take another look, and together we sat down to make further improvements. This is the board that is now featured in the GitHub repository [3] and which will also soon be featured in a CrowdSupply crowdfunding campaign [4], so that people can get their own.
Like its predecessor, this board is an official Raspberry Pi HAT, with the requisite EEPROM with configuration settings. Changes include better channel separation and surge protection on the 6 input channels, an isolated DC-DC supply with full class B EM compliance, isolation slots, improved spark gaps and about 4-5 kV AC surge isolation. All 6 input channels are rated at 3-12V. It also provides input protection when the Raspberry Pi is provided with power via this DebounceHAT board.
A number of prototype boards of this current version have been assembled for further testing over the coming time, while I hunt down a company to assemble the production boards.
Hopefully this board will save a lot of people from having to jump through the same hoops and painful discoveries that I did :)
Maya
[1] https://github.com/MayaPosch/ClubStatusService
[2] https://mayaposch.wordpress.com/2018/06/26/designing-an-rc-debounce-circuit/
[3] https://github.com/MayaPosch/DebounceHat
[4] https://www.crowdsupply.com/maya-posch/debounce-hat
Sunday, 1 March 2020
NymphCast: a casual attempt at an open alternative to ChromeCast and kin
For the past half year or thereabouts I have been working on a little project of mine that I call 'NymphCast' [1]. The initial idea for it originated at the beginning of 2019, when I found myself perplexed at the usability and compatibility issues that exist with streaming solutions like ChromeCast (Google proprietary) and AirPlay (Apple proprietary), and at how little PulseAudio (Linux-exclusive) improves on this. Surely it should be possible to have an open standard that just works on any platform for streaming audio and video across a network.
The first step was scraping together existing software solutions for handling the boring parts, like shovelling bytes across the network and decoding and playing back audio and video. For the network communication between server and client I was happy to use my own NymphRPC [2] remote procedure call library, as it provides the required functionality for transferring data between client and server in a light-weight library, one which I also know to have been used in production environments. For the video and audio decoding and the playback I settled on FFmpeg [3] with LibSDL [4]. I also tried the GStreamer and LibVLC libraries, but could not make either of them work for the project.
It's interesting to essentially rebuild an existing system. The requirements for NymphCast were rather obvious: the same as for ChromeCast, Roku and Amazon's dongle. That 'just' left implementing it. Playing back audio and video from a memory buffer and keeping that buffer filled, while also allowing for efficient seek operations on the remote file, gives one a lot of insight into where the bottlenecks lie on a computer and a network, but is mostly tedious work. Most of the work probably went into the other aspect of ChromeCast and kin: custom apps that add functionality, such as streaming from online services like YouTube, Netflix and SoundCloud.
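The buffer-filling part boils down to a classic bounded producer/consumer setup. A generic sketch of the idea (not NymphCast's actual buffer code): the network thread pushes received chunks, the playback thread pops them, and a low watermark signals when to request more data from the client.

```cpp
#include <condition_variable>
#include <cstdint>
#include <deque>
#include <mutex>
#include <vector>

class StreamBuffer {
    std::mutex mtx;
    std::condition_variable dataAvailable;
    std::deque<std::vector<uint8_t>> chunks;
    std::size_t byteCount = 0;
    static constexpr std::size_t kLowWatermark = 512 * 1024; // request refill below 512 kB.

public:
    // Checked by the network thread to decide when to ask the client for more data.
    bool needsRefill() {
        std::lock_guard<std::mutex> lock(mtx);
        return byteCount < kLowWatermark;
    }

    // Producer side: append a chunk received from the network.
    void push(std::vector<uint8_t> chunk) {
        {
            std::lock_guard<std::mutex> lock(mtx);
            byteCount += chunk.size();
            chunks.push_back(std::move(chunk));
        }
        dataAvailable.notify_one();
    }

    // Consumer side: the playback thread blocks here until data arrives.
    std::vector<uint8_t> pop() {
        std::unique_lock<std::mutex> lock(mtx);
        dataAvailable.wait(lock, [this] { return !chunks.empty(); });
        std::vector<uint8_t> chunk = std::move(chunks.front());
        chunks.pop_front();
        byteCount -= chunk.size();
        return chunk;
    }
};
```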
Since ChromeCast's server solution (what runs on a ChromeCast dongle) is essentially a Chrome browser instance, these ChromeCast apps are simply HTML pages with JavaScript that get loaded from a remote server. Because NymphCast is a native C++ application it's free to use whatever approach it wants for NymphCast apps. Here I wanted a scripting language that's easy to integrate with C++ and both easy and powerful enough to be used for whatever such a custom app might require. Having embedded both Lua and Python in the past, I wanted something less clunky and ideally statically typed, which led me to AngelScript [5].
AngelScript has essentially C++ syntax and supports most C++ concepts directly, allowing it to be embedded in a C++ application without any jumping through hoops and with no overhead. What it also shares with C++ is that it is statically typed, meaning that when a script file is compiled, the AngelScript compiler performs type and syntax checks, informing you where you made a mistake instead of the runtime throwing an error and bailing out while the app is running. I really appreciated this feature while developing the SoundCloud demonstration app [6].
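For the curious, embedding AngelScript in a C++ host looks roughly like the sketch below. The script file name and the exposed print function are hypothetical stand-ins; the real NymphCast app interface lives in the repository. Note how errors surface when the module is built, before anything runs:

```cpp
#include <angelscript.h>
#include "scriptstdstring/scriptstdstring.h" // add-on: script string type (include paths may vary).
#include "scriptbuilder/scriptbuilder.h"     // add-on: script builder helper.
#include <iostream>
#include <string>

// A host function exposed to scripts.
void print(const std::string &msg) { std::cout << msg << std::endl; }

int main() {
    asIScriptEngine* engine = asCreateScriptEngine();
    RegisterStdString(engine); // make 'string' available to scripts.
    engine->RegisterGlobalFunction("void print(const string &in)",
                                   asFUNCTION(print), asCALL_CDECL);

    // Building the module is where the static typing pays off: type and
    // syntax errors are reported here, not halfway through a running app.
    CScriptBuilder builder;
    builder.StartNewModule(engine, "app");
    builder.AddSectionFromFile("app.as"); // hypothetical script file.
    if (builder.BuildModule() < 0) { return 1; }

    asIScriptFunction* func =
        engine->GetModule("app")->GetFunctionByDecl("void main()");
    asIScriptContext* ctx = engine->CreateContext();
    ctx->Prepare(func);
    ctx->Execute();
    ctx->Release();
    engine->ShutDownAndRelease();
    return 0;
}
```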
As an aside, people have asked me over the past months why I didn't just implement the AirPlay or ChromeCast protocols (both of which have been more or less reverse-engineered). The primary reason is that both are proprietary protocols which have been altered by the companies behind them and will very likely be changed again in the future. There is also limited use in supporting those protocols, as one isn't simply going to support ChromeCast apps and the like: these have been cryptographically locked away, so unless you crack those AES or similar keys, it'd be at best a half-hearted kludge and at worst a massive waste of time and effort.
Considering that I managed to implement a basic SoundCloud NymphCast app within an hour using the public (HTTP) API, it seems more productive to get NymphCast into a state where I could ask SoundCloud and similar companies whether they'd want to produce an official NymphCast app (hosted on their own servers), putting NymphCast on a level playing field with the competition, rather than acting like a trojan and constantly having to fix things whenever Google or Apple change something on their end. That's not the kind of 'competing' I am into.
This leads me to the current state of NymphCast: I have used the NymphCast server on Windows, Linux x86 and Linux ARM on Raspberry Pi SBCs and other ARM-based SBCs. The client runs on all desktop platforms and on Android (Qt-based). While I would definitely call it Alpha-level software, with some features such as seeking support still being implemented, I am rapidly running out of missing features to implement. Leaving seeking support as one of the final features to implement for the first release was mostly because it is less essential than stabilising the other features.
One interesting thing I found during testing is that even if one never wants to skip through an audio track or film, seeking support is still needed: the MP4 container format, for one, has this nice feature where by default it puts the 'header' (the moov atom) with the container details at the end of the file. This means that the player has to first seek to the end of the file, read the 'header', set up the configuration, seek back to the beginning of the file and only then start playing.
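In FFmpeg terms: when the server reads a file through a custom I/O context, as it does when streaming from a remote client, the demuxer needs a working seek callback just to open such a file. A sketch of the idea, where readFromClient and seekOnClient are hypothetical stand-ins for the RPC calls back to the client that holds the file:

```cpp
extern "C" {
#include <libavformat/avformat.h>
}

// To be implemented against the client connection: fill 'buf' with up to
// 'bufSize' bytes, and reposition the remote read offset, respectively.
int readFromClient(void* opaque, uint8_t* buf, int bufSize);
int64_t seekOnClient(void* opaque, int64_t offset, int whence);

AVFormatContext* openRemote(void* clientHandle) {
    const int bufSize = 32 * 1024;
    uint8_t* buffer = static_cast<uint8_t*>(av_malloc(bufSize));

    // The last argument is the seek callback: without it, an MP4 file with
    // its moov 'header' at the end cannot even be opened.
    AVIOContext* io = avio_alloc_context(buffer, bufSize, 0, clientHandle,
                                         readFromClient, nullptr, seekOnClient);
    AVFormatContext* fmt = avformat_alloc_context();
    fmt->pb = io;
    if (avformat_open_input(&fmt, nullptr, nullptr, nullptr) < 0) {
        return nullptr; // open failed; cleanup omitted in this sketch.
    }
    return fmt;
}
```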
All taken together, this seeking support and some functions to get real-time playback information in the client are the only things that still need to be implemented before the first NymphCast release goes into Beta, meaning the shaking out of bugs and any other issues that may pop up during testing. Here I want to make it as easy as possible for people to help with testing, by providing an easy way to get NymphCast onto a few supported platforms: the Raspberry Pi for the server, and the desktop platforms as well as Android devices for the client. Compiling the client for iOS smartphones is harder, as this requires one to have a Mac system, which I do not, but this can be solved as well.
So what is it that I want to accomplish with NymphCast? Most of all to have a nice platform that I can use myself for streaming audio and video to any (powered) speakers and displays that I have standing around, along with the extensibility offered by NymphCast apps. I also hope that others will start using it, even adding NymphCast support to their (smartphone) apps. It would be wonderful to see private companies embrace it and release official apps that would allow people to use their services from NymphCast, cutting out proprietary ChromeCast, Airplay, Sonos and various SmartTV platforms.
The open nature of NymphCast (3-clause BSD licensed) is one big benefit, but so is the ability to install the server on any Raspberry Pi board or equivalent without any hardware having to be produced, shipped, purchased and eventually tossed away, like the dongles for Google and Amazon's solutions, or entire speakers and devices in the case of Sonos and Roku. NymphCast will work with any general-purpose system, whether it is a Raspberry Pi, OrangePi, Intel NUC, some AMD APU-based board or a full-blown gaming PC.
Call me nuts, but I think that it might just be crazy enough to work.
Maya
[1] https://github.com/MayaPosch/NymphCast
[2] https://github.com/MayaPosch/NymphRPC
[3] http://www.ffmpeg.org/
[4] http://www.libsdl.org/
[5] http://www.angelcode.com/angelscript/
[6] https://github.com/MayaPosch/NymphCast/blob/master/src/server/apps/soundcloud/soundcloud.as