It’s been a while since the last update. I fell into one of those cycles where I was always convincing myself I’d post an update just as soon as I made some more progress on the next feature.
- set up a repeatable Debian packaging process for the server-side agent
- set up an apt repo on S3 to make the packages available via `apt-get`
- created a shell script to automate all the steps in the server-side install process
- built a working cross-platform desktop client using the Chromium Embedded Framework
- designed a new security and authentication strategy for the client and server components
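The packaging and apt-repo items above can be sketched roughly like this. This is a minimal sketch, not the actual script: the directory layout, bucket name, and dist names are all placeholder assumptions.

```shell
#!/bin/sh
# Rough sketch of publishing a server-side agent .deb to an apt repo on S3.
# All names here (package dir, bucket, dist layout) are hypothetical.
set -eu

publish_deb() {
  pkg_dir="$1"   # directory containing DEBIAN/control etc.
  bucket="$2"    # S3 bucket that serves the repo

  mkdir -p repo/pool repo/dists/stable/main/binary-amd64

  # 1. Build the .deb from the package directory
  dpkg-deb --build "$pkg_dir" repo/pool/

  # 2. Generate the Packages index that apt-get expects
  ( cd repo && apt-ftparchive packages pool ) \
    > repo/dists/stable/main/binary-amd64/Packages
  gzip -kf repo/dists/stable/main/binary-amd64/Packages

  # 3. Generate the Release file for the dist
  ( cd repo && apt-ftparchive release dists/stable ) \
    > repo/dists/stable/Release

  # 4. Sync the whole repo tree up to S3
  aws s3 sync repo "s3://$bucket" --acl public-read
}
```

A client machine would then point apt at the bucket with a sources.list entry along the lines of `deb https://BUCKET.s3.amazonaws.com stable main` before running `apt-get update`.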
The last two items on the list ended up turning into enormous time sinks. I tried many different solutions for delivering a desktop application based on web technologies before I finally found the best fit for this project. Here are some highlights of the journey, along with what finally worked and the road ahead.
I immediately got things working on OSX using Cocoa/Objective-C and the native WebView control. Trying to get a similar WebKit-based solution working on Windows just resulted in endless frustration and lots of wasted time. I must have compiled WebKit Cairo at least thirty times.
Then I tried learning C++ so I could get a Qt5-based app working with its embedded WebKit engine. That solution worked, but once I started trying to integrate with the desktop I quickly got into deeper C++ waters than I was equipped to handle.
Then I started learning C# so I could use Xamarin/Mono. I’ve always found C# to be a pretty approachable language, and I was able to get something working on OSX right away. Unfortunately the Mono toolkit doesn’t have an easily embeddable web browser control that works across OSX and Windows. I tried a few different Chromium Embedded projects targeting Windows but kept running into problems at every corner. I installed almost every version of Visual Studio, including 2005, 2008 and 2010. If I wasn’t fighting a bizarre error message in Visual Studio, I was battling an endless stream of DLL errors trying to compile things.
After all my attempts to get SSH tunneling working through the different desktop technologies I mentioned above, I learned a lot about the benefits and limitations of each approach. Having gone through that experience, I’ve come away with a renewed perspective on how I’ve been developing the app and how I’d like to move forward.
Right now I’m focused on getting the simplest version I can think of out there that still provides a valuable user experience. I’m leaning towards basic file browsing and editing capabilities, then iterating from there.
On the client side I’ve figured out a way to deliver both a web and a desktop version of the app that provides SSH tunneling capabilities. Because of the way that works, there is an interesting opportunity if I ever get far enough to make this into a business. With that said, I’ve decided that keeping the client code closed source for now keeps the most options open for the future.
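At its core, the tunneling the client relies on is standard SSH local port forwarding. A minimal sketch, where the host and ports are placeholder values rather than anything from the actual app:

```shell
# Sketch of SSH local port forwarding; host and ports are placeholders.
open_tunnel() {
  local_port="$1"   # port the app connects to on localhost
  remote_host="$2"  # e.g. user@example.com
  remote_port="$3"  # service port on the server

  # -f: go to background once the tunnel is up
  # -N: don't run a remote command, just forward
  # -L: forward localhost:local_port to remote_port on the server
  ssh -f -N -L "$local_port:localhost:$remote_port" "$remote_host"
}
```

For example, `open_tunnel 9090 user@example.com 3000` would let the app talk to `localhost:9090` as if it were port 3000 on the server, with everything encrypted over SSH.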
On the server side, now that the SSH tunnel is working, I think I can do away with the requirement to install a server-side agent up front before you can do anything. When you initially start using the app, a certain set of capabilities will be available without installing anything on your server. I still believe a server-side component is needed for deeper integration. Having looked into Node.js, Python and Ruby over the last few months, I think the Ruby community has the best tools around Linux system administration.
The server-side component will always be open source, and I plan on updating that repo as I experiment with a Ruby-based agent and/or make changes to the existing Node.js-based version.
This was a long-winded update, but I hope it provided some insight into what I’ve been working on and where I’m trying to take things. As always, I’d love to hear your questions and comments. Cheers!