Fedora 22 Beta Release!

Fedora 22 Beta Release Announcement

The Fedora 22 Beta release has arrived, with a preview of the latest free and open source technology under development. Take a peek inside!

What is the Beta release?

The Beta release contains all the exciting features of Fedora 22's editions in a form that anyone can help test. This testing, guided by the Fedora QA team, helps us target and identify bugs. When these bugs are fixed, we make a Beta release available. A Beta release is meant to be feature complete and bears a very strong resemblance to the final release. The final release of Fedora 22 is expected in May.

We need your help to make Fedora 22 the best release yet, so please take some time to download and try out the Beta and make sure the things that are important to you are working. If you find a bug, please report it – every bug you uncover (and/or help fix!) is a chance to improve the experience for millions of Fedora users worldwide.

Together, we can make Fedora rock-solid. We have a culture of coordinating new features and pushing fixes upstream as much as feasible, and your feedback will help improve not only Fedora but Linux and free software on the whole.

Base platform

  • Faster and better dependency management: Yum has been replaced by DNF as the default package manager. DNF has command line options and configuration files very similar to yum's, but also includes several major internal changes, such as using libsolv (developed in coordination with friends from the openSUSE project) for faster and better dependency resolution. The dnf-yum package provides automatic redirection from yum to dnf on the command line for compatibility, and the classic yum command line tool has been renamed to yum-deprecated as a transitional step for tools still using it.

Fedora 22 Cloud

The Fedora 22 Cloud Edition builds on the work completed during the Fedora 21 cycle, and brings in a number of improvements that make Fedora 22 a superb choice for running Linux in the cloud.

Ready for the Fedora 22 release, we have:

  • The latest versions of rpm-ostree and rpm-ostree-toolbox. You can even use rpm-ostree-toolbox to generate your own Atomic hosts from a custom set of packages.

  • Introduction of the Atomic command line tool, which helps manage Linux containers on Atomic Hosts and update the hosts themselves.

Fedora 22 Server

Fedora 22 Server Edition brings several changes that will improve Fedora for use as a server in your environment.

  • Database Server Role: Fedora 21 introduced Rolekit, a daemon for Linux systems that provides a stable D-Bus interface to manage deployment of server roles. The Fedora 22 release adds onto that work with a database server role based on PostgreSQL.

  • Cockpit Updates: The Cockpit Web-based management application has been updated to the latest upstream release which adds many new features as well as a modular design for adding new functionality.

  • XFS as the default filesystem: XFS scales better for servers and handles higher storage capacities, so we have made it the default filesystem for Fedora 22 Server. Other filesystems, including ext4, continue to be supported, and the ability to choose them has been retained.

Fedora 22 Workstation

As always, Fedora carries a number of improvements to make life better for its desktop users and developers! Here's some of the goodness you'll get in Fedora 22 Workstation edition.

Enhancements:

  • The GNOME Shell notification system has been redesigned and subsumed into the calendar widget.
  • The Terminal now notifies you when a long-running job completes.
  • The login screen now uses Wayland by default, with automatic fallback to Xorg when necessary. This is a transitional step toward replacing Xorg with Wayland by default in the next release, and should make no user-visible difference.
  • Installation of GStreamer codecs, fonts, and certain document types is now handled by Software, instead of gnome-packagekit.
  • The Automatic Bug Reporting Tool (ABRT) now features better notifications, and uses the privacy control panel in GNOME to control what information is sent.

Appearance:

  • The Nautilus file manager has been ported from the deprecated GtkAction APIs to GActions, for a better, more consistent experience.
  • The GNOME Shell has a refreshed theme for better usability.
  • The Qt/Adwaita theme is now code complete, and Qt notifications have been improved for a smoother experience using Qt-based apps in Workstation.

Under the covers:

  • Consistent input handling for graphical applications is provided by the libinput library, which is now used for both X11 and Wayland.

Spins

Fedora spins are alternative versions of Fedora, tailored for various types of users via hand-picked application sets or customizations. You can browse all of the available spins via http://spins.fedoraproject.org. Some of the popular ones include:

Fedora 22 KDE Plasma spin

Plasma 5, the successor to KDE Plasma 4, is now the default workspace in the Fedora KDE spin. It has a new theme called Breeze, which has cleaner visuals and better readability, improves certain workflows, and provides an overall more consistent and polished interface. Changes under the hood include the switch to Qt 5 and KDE Frameworks 5, and the migration to a fully hardware-accelerated graphics stack based on OpenGL(ES).

Fedora 22 Xfce spin

The Xfce spin has been updated to Xfce 4.12. This release brings an enormous number of improvements, including HiDPI support, improved window tiling, support for GTK 3 plugins, and better multi-monitor support.

Issues and Details

This is a Beta release. As such, we expect that you may encounter bugs or missing features. To report issues encountered during testing, contact the Fedora QA team via the mailing list or in #fedora-qa on freenode.

As testing progresses, common issues are tracked on the Common F22 Bugs page:

https://fedoraproject.org/wiki/Common_F22_bugs

Roadmap

While Fedora 22 is still under active development, we have a number of new features being developed in parallel for Fedora 23 as well. While all of these features are works in progress and the plans have not been finalized, we want to highlight a few major changes expected, and invite your early testing and feedback.

  • Wayland by default for Fedora 23 Workstation. XWayland will continue to be provided for compatibility with applications using X.

  • Python 3 by default for Fedora 23 Workstation: While most of the default applications are already using Python 3 in Fedora 22, Fedora 23 Workstation will only include Python 3 by default. Python 2 will continue to be included in the repositories.

  • A Vagrant image for Fedora 23 Atomic Host and Cloud Images. We're supplying Vagrant boxes that work with KVM or VirtualBox, so users on Fedora will be able to easily consume the Vagrant images with KVM, and users on Mac OS X or Windows can use the VirtualBox image.

For tips on reporting a bug effectively, read "how to file a bug report":

https://fedoraproject.org/wiki/How_to_file_a_bug_report

Release Schedule

The full release schedule is available on the Fedora wiki. The current schedule calls for a final release at the end of May.

https://fedoraproject.org/wiki/Releases/22/Schedule

These dates are subject to change, pending any major bugs or issues found during the testing process.

NewContributorWat

One of my students tagged me in an issue on GitHub about helping new contributors during the National Day of Civic Hacking. After spending some time writing the comment, I decided to repost it here :)

Original thread here: https://github.com/18F/18f.gsa.gov/issues/668 and my reply here: https://github.com/18F/18f.gsa.gov/issues/668#issuecomment-90155612

In the past, many local event organizers have had agencies reach out to them directly and offer to partner on specific projects and initiatives (our local events in Rochester have featured challenges and speakers from the EPA, for example.) I would recommend reaching out to the national organizers, and seeing if you can get a list of cities/events that are not already partnered with a federal agency, or see if they will put your projects out on blast during the next organizer's call.

Either way, the most important thing for getting new contributions, IMHO, is to be sure you have *clear* action items, that are surmountable in the time of the event, with *dedicated* upstream mentors ready to synchronously provide feedback. That sounded kinda buzzword-y, so:

  1. Clear action item(s) (FIX #1337: CSS Bug on http://github.com/18f/18f.gsa.gov/issues/668)
  2. Clear documentation (README with instructions for getting stack up and running, styleguides, etc...)
  3. Person in IRC/Chat actively answering questions from contributors, and ideally hacking with them.

SecondMuse historically does a great job vetting the "problems" that agencies come up with, so working with them will likely help with that first bullet point.

There is *nothing* worse than spending an entire hackathon trying to get "to the starting blocks" and failing to get a stack just up and running. It is demoralizing, and makes new contributors very discouraged. Be sure that whatever contributions you are looking to garner have stacks that can be trivially installed on Linux/Mac/Windows. (e.g., shipping a requirements.txt or setup.py with your Python project, or even better, distributing to http://pypi.python.org for easy installation.)

Having that mentor available to kick down blockers and vgrep tracebacks is the difference between a new contributor spending 3 hours hunting down an error, or a mentor providing that 'obvious-to-them-seen-it-a-million-times-one-liner-fix' in 3 minutes. If you can get a mentor that can commit to the *entire* event, that is a super amazing morale boost for new contributors. There is a certain magic in looking in the channel or around the room and seeing upstream hacking right alongside you in the trenches deep into the wee hours of the morning.

newfangledjstoolschains: Part I

I stumbled upon semantic-ui whilst surfing, and immediately became intrigued. It made sense, and it looked great! I had to try it for my new static blog!

Sure, I could just generate the source for the widgets and css I wanted from their docs, and hand-copy it into my various static folders, but instead, I wanted to attempt to employ the power of a build toolchain. It has been a long while, but I'm slowly remembering how to wield themtharnewfangledjstoolschains, aka Gulp, and bower, and npm.

Gulp and bower in particular are quite useful. Bower is like pip install for javascript-y libraries. But before using bower, we gotta install node. If there is one thing that I know I love, it is installing random javascript code from the internet onto my machine globally. Every time I see things like this in a README file, I get very very sad:

$ sudo npm install -g totally_legit_js_library
$ sudo pip install totally_legit_python_library

Please, don't sudo install the things, and certainly not globally, without taking a moment to think if that is desirable or necessary. Don't get me wrong, I sure don't grok all the way to the bottom of every stack I deploy, but this is exactly why I try not to give any-ole stack the ability to run as root.

On top of that, I've been taught you should *almost* always use a virtualenv, or other safe and somewhat isolated micro-universe far away from your system packages.

There are a number of solutions, but my favorite one thus far has been installing into a python virtualenv! One time at a hackathon, I blindly trusted a teammate who encouraged me to just curl and ./ some shell script off of a website somewhere to get nvm set up, which I begrudgingly did, but have since discontinued the practice of. Here is what I've been doing instead:

https://lincolnloop.com/blog/installing-nodejs-and-npm-python-virtualenv/

Yes, it takes longer, but there is something so satisfying about building from source. I found that after I got node installed, I could npm install -g all-the-things that I needed, and those would be conveniently located within a python virtualenv that I'd likely be using to serve up the static content anyways.

bower.io
gulpjs.com

So far, so good! I've got a working js toolchain, with my desired deps installed! You can find the initial commits here: https://github.com/decause/cardsite

Part II Goals

  1. Get node and npm installed on my local machine without requiring root
  2. Get a working bower.json package to install the things
  3. Get a working gulpfile.js package to move the installed things
  4. Get a working nikola deploy workflow to run the gulpfile in addition to building the site!
  5. Incorporate semantic-ui cards into fedmsg feed, and possibly other aspects of site.

Stay tuned for Part II!

HelloWorld

Welcome to the world of static blogs! I've been neck-deep in a seemingly stable and wonderful project called nikola, that allows for some pretty fantastic python static site generation.

I've been using flask for all my "deploying a quick webapp to openshift" needs, which has worked out splendidly, but after experimenting with http://thisweekinfedora.org I felt like I had to dive in head first and see what all the hubbub was about.

Some 4-ish hours later, here we are!

Some caveats:

  • The amazing themes listed at bootswatch.com are unavailable to me...
  • I don't even know what I don't know I'm doing wrong yet ;)
  • I'm git push -f openshift master to a fresh php5.4 cartridge
  • Scratch that, I'm no longer force-pushing to production! (but I'm not sure why nikola deploy rsync --delete started allofasudden blowing away .git/ after I had been deploying in such a way for hours this evening...)
  • I've added my Opensource.com articles to my blog via the feed_import plugin!!!
  • I've added my decause.github.io articles to my blog via hand copying the source! Def not as exciting, but still cool.

BUT!

I'm SUPER excited to play around more with programmatically generating static content with the power of python.

Shout-out lmacken, threebean, and ryansb for their wizardry and patience.

grokkingfedmsg

This is a raw dump of brainstormery had during a hacksession with Threebean.

Deps

$ sudo yum install python-fedmsg-meta-fedora-infrastructure
$ hub clone ralphbean/fedora-stats-tools

The Longtail Metric

Though this was only about 90 minutes of cycling, it is the part that is burned most into my brain. This metric is all about helping identify how "flat" the message distributions are, to avoid uneven burnout... aka, take the agent that is generating the most messages within a time frame (the "Head"), and the agent generating the least number of messages in that time frame (the "Tail"), and come up with a line drawn between them. The more "flat" that line is, the more evenly the generated messages are distributed amongst all contributors. Still unclear? Me too ;) Here's some python instead:


Longtail.analyze at longtail-gather.py

    import collections
    import json
    import pprint
    import time

    import requests

    import fedmsg.config
    import fedmsg.meta

    config = fedmsg.config.load_config()
    fedmsg.meta.make_processors(**config)

    start = time.time()
    one_day = 1 * 24 * 60 * 60
    whole_range = one_day
    N = 50


    # Fetch one page of raw messages from the datagrepper API, covering the
    # `delta`-second window that ends at timestamp `end`.
    def get_page(page, end, delta):
        url = 'https://apps.fedoraproject.org/datagrepper/raw'
        response = requests.get(url, params=dict(
            delta=delta,
            page=page,
            end=end,
            rows_per_page=100,
        ))
        data = response.json()
        return data


    results = {}
    now = time.time()

    # Sample N evenly spaced end-timestamps across the past day; for each one,
    # tally messages per user over the `whole_range` window that precedes it.
    for iteration, end in enumerate(range(*map(int, (now - whole_range, now, whole_range / N)))):
        results[end] = collections.defaultdict(int)
        data = get_page(1, end, whole_range)
        pages = data['pages']

        for page in range(1, pages + 1):
            print "* (", iteration, ") getting page", page, "of", data['pages'], "with end", end, "and delta", whole_range
            data = get_page(page, end, whole_range)
            messages = data['raw_messages']

            for message in messages:
                users = fedmsg.meta.msg2usernames(message, **config)
                for user in users:
                    results[end][user] += 1

        #pprint.pprint(dict(results))

    with open('foo.json', 'w') as f:
        f.write(json.dumps(results))

Longtail.analyze at longtail-analyze.py


    import json

    # Sort (user, count) pairs by message count.
    comparator = lambda item: item[1]

    with open('foo.json', 'r') as f:
        all_data = json.loads(f.read())

    # Coerce the counts to floats so the slope math below stays floating point.
    for timestamp, data in all_data.items():
        for username, value in data.items():
            all_data[timestamp][username] = float(value)

    timestamp_getter = lambda item: item[0]

    sorted_data = sorted(all_data.items(), key=timestamp_getter)

    results = {}

    for timestamp, data in sorted_data:
        # The "Head" is the busiest agent in this window; the "Tail" is the quietest.
        head = max(data.items(), key=comparator)
        tail = min(data.items(), key=comparator)

        # Draw the line from the Head's count down to the Tail's count.
        x1, y1 = 0, head[1]
        x2, y2 = len(data), tail[1]

        slope = (y2 - y1) / (x2 - x1)
        intercept = y1

        metric = 0

        data_tuples = sorted(data.items(), key=comparator, reverse=True)

        # Accumulate how far each contributor's actual count falls from the line.
        for index, item in enumerate(data_tuples):
            username, actual = item
            # line formula is y = slope * x + intercept
            ideal = slope * index + intercept
            diff = ideal - actual
            metric = metric + diff

        print "%s, %f" % (timestamp, metric / len(data))
        results[timestamp] = metric / len(data)


    import pygal

    # Chart the per-window metric over time.
    chart = pygal.Line()
    chart.title = 'lol'
    chart.x_labels = [stamp for stamp, blob in sorted_data]
    chart.add('Metric', [results[stamp] for stamp, blob in sorted_data])
    chart.render_in_browser()


Stuff to build/consider next?

Radar Charts

We must be concerned with normalizing the data, because koji will always have the highest magnitude of messages. This is done by the following (see the sketch after the list):

  1. query all messages of a type, to get the total
  2. query just the messages for that user, in that type
  3. divide usermessages/totalmessages
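
A minimal python sketch of that normalization, assuming hypothetical per-topic counts already gathered from datagrepper (the topic names and numbers here are made up for illustration):

    # Hypothetical totals per topic, and one user's counts for each.
    total_messages = {
        'org.fedoraproject.prod.koji.build.state.change': 5000,
        'org.fedoraproject.prod.git.receive': 800,
    }
    user_messages = {
        'org.fedoraproject.prod.koji.build.state.change': 50,
        'org.fedoraproject.prod.git.receive': 40,
    }

    # Step 3: usermessages / totalmessages, per topic, so koji's sheer
    # volume doesn't drown out every other axis of the radar chart.
    normalized = dict(
        (topic, user_messages.get(topic, 0) / float(total))
        for topic, total in total_messages.items()
    )
    # => koji: 0.01, git.receive: 0.05
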
Daily +/-

just the diff of topic counts

Weekly +/-

just the diff of topic counts

Real-time?

  • bar chart with a bar for each message topic?
  • array of "lights" that blink each time a message comes across the bus
  • revisit the live-gource of fedmsg :)

What can we do to improve computer education?

SIGCSE 2015, the technical symposium for computer science educators, kicks off this year March 4-7 in Kansas City, Missouri.


The SIGCSE Technical Symposium addresses problems common among educators working to develop, implement and/or evaluate computing programs, curricula, and courses. The symposium provides a forum for sharing new ideas for syllabi, laboratories, and other elements of teaching and pedagogy, at all levels of instruction.

Last year Pamela Fox, Computing Curriculum Engineer at Khan Academy, was part of a panel called "Disruptive Innovation in CS Education." I spoke with her afterwards to get her thoughts on how open source fits into education and the future of computer education.

This is a partial transcript.



Where are you from?

I was born in Los Angeles, grew up in upstate New York. My dad is a computer science professor at Syracuse University. My mom is a rocket science programmer. My dad is launching a "big data" MOOC, so we're both very interested in this field.

Where are you now?

Now, I work for Khan Academy in Mountain View, California, and live in San Francisco. I went back to the west coast as soon as I could and joined Google after graduating from the University of Southern California in Los Angeles. I went to Australia, then got back to the Bay Area three years ago. I was working on the Google Maps API in Developer Relations, writing articles and demos, which is basically what I do now, but for non-proprietary technology.

I first learned HTML in the 7th grade. Within a year, I made a website that taught HTML to other people called "htmlforkids" or something (even though I was a kid too.) That was probably my first "official" educational content. After that I was a computer camp counselor. In college, I organized workshops around 3D programming (started a SIGGRAPH chapter). I use Khan Academy to get better at math now.

Why free and open source software?

I really enjoy teaching, and I enjoy trying to figure out how to teach something. I find it fascinating when I put out a new course, I read the comments and say, "Wow, I forgot what it was like not to know." I'm interested in humans; I read a lot about how humans work, and behavioral science. There is a lot of that in teaching people; it is all about learning. I'm just learning.

I'm generally a fan of open source, and that is another reason why I'm at Khan Academy, where we do that. As a web developer, I shouldn't have to reinvent the wheel. Often I say, "Really... really? I gotta solve this problem? I'm the only one that has ever tried to do this?" No, it's just someone didn't share it. Many of these components should be open source. Some people may say, "Well, we don't have jobs if we don't have to rewrite it." I don't want to believe that we live that way. I have friends with open source projects, who have tried to make money on it, doing enterprise versions, and getting paid for support. I'm always interested in the different ways of monetizing code. I feel like that part still has open questions.

I think we should encourage sharing. Kids are used to the idea of "cheating." Someone copies your code, and they say, "Hey, that's cheating," and we have to tell them, "No, it's MIT Licensed, and it is open source."

We have to teach them that sharing is OK. We have to do a better job of teaching open source and sharing, counter to what they may see in school. I'm upset there is no representation from the coding academies here at SIGCSE. They are trying to figure out how to get people ready for programming jobs in 12 weeks. I feel like I'm here representing that industry. Half engineer, half educator. I feel like I'm representing the "meritocracy" getting a real job thing too.

What can we do to improve computer education?

Coding academies are formed by people who learned alternatively, or didn't do well in college, and they are figuring out how to teach based on industry, and their good and bad experiences in college. They have good things to say about career-oriented computer science education. They should be here (at SIGCSE) too. Girl Develop It, Women Who Code, they are all doing similar work, and they are disconnected from this world. I'm not just trying to do women-friendly hackathons but newbie-friendly hackathons too. More women are newbies than men right now, so if you fix things for newbies—people who are intimidated, who don't think of themselves as superstars—you fix it not only for women, but for men that have that same situation. Right now, we have to say "this is for women/girls" but the lines are getting increasingly blurred, and maybe we won't have to worry someday, but for now, we have to bring the good stuff up to parity.

I'm quite interested in how we can prepare the next generation for the world with its concerns about security and privacy. I like reading books like Little Brother by Cory Doctorow, which is a YA book that forces kids to think about these issues. I want to find a way to introduce the next generation to these issues and be relevant. If anyone has ideas on how to do that, I'd like to know.


bitHound puts out features, not fires

The following is a partial transcript from a phone interview with Dan Silivestru, CEO and co-founder of bitHound.io—automated, open source, code quality analysis software.

Where are you from originally?

I was born in Romania, lived there for six years, don't remember any of it. Then my parents went to Israel for seven years, then moved to Montreal, Quebec, and lived there for another seven years. Then I moved to Ontario, and I'm still here.

When did you start bitHound?

We started in November 2013, and I went full-time in January 2014.

How many folks are at bitHound now?

There are nine of us. A CTO, COO, development team of four, plus staff to handle operations and HR.

What is bitHound, and what do you do?

We are centered around the concept that writing code is easy, but building resilient, remarkable software is difficult. There is much that can be told as you go along. We analyze projects from conception to today, pointing out hotspots that require attention, and suggestions on how to fix them. We track code as you move forward, so we can say if things are getting better or worse.

Something I'm proud of is a feature that showcases the dependencies that your projects pull in from npm and Bower, for example. It helps you understand the code that you bring into your project, and then ranks it from a quality perspective. The dashboard shows you up-to-date or out-of-date status, and assigns you a bitHound score that is derived from Code Quality, Maintainability, and Stability. You can then pick better dependencies based on quality level. You can really dive in with bitHound.

Does bitHound support other programming languages besides JavaScript?

It is JavaScript only for now, but in the future there could be more. Rather than just the bare minimum, we think that to provide value, we've got to do a deep dive into a language. We run almost a dozen different "critics" or analyzing engines, to get "actionable insight."

(Remy: Completely understandable. Source code analysis is not what I would call a "trivial" problem...)

It is not an easy problem; it takes a lot of time and effort. We've been at it for about a year, and it is still in closed beta.

What is it like for a bitHound user?

We strive to make the user interaction with the product very simple. We think that if your software needs a manual, you're probably doing some things wrong.

The experience is simple: you authenticate with OAuth through GitHub, and enable bitHound on a per-repository basis. We run our analysis, which takes 2-20 seconds, and then we fill in the timeline going backwards.

The idea was, on the first dashboard, you would get an "eagle-eye" view: the top five priority files, and you can expand the list further. We've had many users who are new to the concept of quality concerns such as linters, duplicate functionality, etc. So, rather than presenting an overwhelming amount of information, we present the top five most worrisome files and annotate the code with issues, so you can filter and address them. You can see on your dashboard which dependencies are out-of-date, and we have some upcoming security analysis features in the works too.

(Remy: This sounds like it would be useful for researchers. Students in my HFOSS course at RIT have to do repository analysis as part of our "Commarch" assignment each semester.)

We have some students who use our products, and are getting introductions to professors. It seems only recently that source control is even being taught at the college level. When dealing with JavaScript, which doesn't really benefit from compiling, linters are a life-saver. The students really appreciate it. We consider these very simple things.

A big part of what we want to do behind bitHound is answer: "How can we get people to build quality code?" You have to treat your job as a craft. It is craftsmanship, and proper tooling around making software.

When did software craftsmanship become a passion for you?

It is one of those things where you get burned. You get burned in production once, twice, and then again. Then you say: "How did I get here?"

I'm self-taught when it comes to software development process. Much of what I learned was learned "on the fly." Having gone through institutions, some left me better, some worse. The first five years of my career were focused on delivering features on time. Then I got introduced to this concept of: "If you are going to cut corners, you need to document it." When we started doing tests, though the upfront work was higher, six months later, we saw big benefits. Even as systems got more complex, you have safe-guards in place. You can go back and fix it. We were able to keep our bug-count down.

We were putting out features, rather than putting out fires.

Per-feature costs are much lower. In the long run, it allows your organization to move forward at a steadier, and faster, pace. Then again, at other places I've joined, they were on their second or third full rewrite. It didn't happen overnight. I didn't just wake up and say: "Test test test." It wasn't until after getting burned...

How did you get into software development?

At the University of Waterloo: honors science, then honors physics. Then I took time off to make money. I was working on phones at a company, and had a friend working in IT. I asked him: "IT? What do you do?" He showed me AS/400 systems and greenscreens. I asked: "How do I do that?" And the next thing I knew, I was sitting in front of the VP asking to do it.

I got an AS/400 manual, and the opportunity and big break to do that. I did that on my own time for a few weeks, and after a few months there, I said: "This is the career I want" and never looked back. I had some tremendous mentors along the way. I was there for a few years, then went into "e-business" doing consulting.

What makes a good mentor? Where and how do you find them?

The number one trait for mentors I've had, to this day, was selflessness. They were doing it for the pure joy of helping someone else develop their craft. They are not about: "I'm teaching to get something out of you for free." Obviously, they have to be knowledgeable, but you can tell more about them as a mentor by how they carry themselves.

If the first five years of my career were about "How do I code?", the years after that were about defining components that interact together. Later on, after my first consulting position, I had a new mentor, with new questions.

Dan: "We should write tests... Why?"

Mentor: "How do you know what the interface between components really looks like?"

Mentors have to be good at their craft, but you can tell a lot by the questions they ask. Listen to how they go about development and the way they ask questions.

What does your day-to-day look like?

I wrote quite a bit of code when we started, but I'm sure much of it has been rewritten. Since we announced funding in late November, it has been more about investor follow-up for me. I've matured within the code environment and in the running-a-company environment. Mostly I'm steering the ship. Day-to-day, lots of emails, some interviews like this one, working with the team to set priority/strategy, and yes, I still write some code. I'm not anywhere near the critical path anymore though :P

What are your feelings on free and open source software / FOSS?

I was a co-founder of a company, tinyHippos, that we founded in 2009—which was acquired by Blackberry in 2011. One of the visions was open sourcing the Ripple Emulator. That happened, and it was fantastic. There were only three of us that moved over, in terms of "how you run a project in FOSS."

I'm proud that they took this project, and donated it to the Apache Foundation. It sits side-by-side with PhoneGap. There was great experience in "how to foster a community" and "how do you be a BDFL?" (benevolent dictator for life). Someone who moves a project forward in a community.

We've loved FOSS throughout our careers, and use it constantly at bitHound. Our analysis depends on many popular frameworks: JSHint for linting, esprima, async for callback structure, and ZeroMQ as a distributed parallel computing platform. You can check out our talk at JSConf last year about distributed complex computing.

Any others?

Yes! We make use of over 80 open source projects throughout our solution, but a few that come to mind are d3, jQuery, and Polymer in production.

Where is bitHound contributing back?

Right before starting this company, Gord Tanner was the core contributor to the Apache Cordova Project; he created the Ripple Emulator, which he donated to the Apache Foundation and which is used today by Microsoft, Intel, Adobe, and over 250K developers. He unified the platform in a coherent manner, and still contributes there. For bitHound, he is a co-founder and CTO leading the technical development of services.

bitHound has a simple philosophy: while we're in heavy product-building mode, our focus is the product, but we have come across projects in our stack that have issues. We always contribute any fixes or additions back into that project. That is our standard operating procedure. If we need to make a specific change for our use that has no benefit to the community at large, we don't push it, but if we fix a bug or add a feature, we always contribute it back upstream. That is a recommendation to any company out there. If you are going to get something for free, from someone's hard work, then if you enhance it, you should contribute it back so all can benefit. Otherwise, the community would die if everyone consumed and no one contributed.

Internally, we have components that we think will be beneficial. There will be prominent links on our site to our GitHub.

One component is what we call "The Farm" where we have workers assigned to do work in parallel. A simple event bus really, with aggregate results coming back. We're all often dealing with the single-threaded nature of the JavaScript language, and you'll see us trying to open source that.

One thing that has happened internally with The Farm is we've already abstracted it into a separate project, to be released. This is one of the things I've realized in my career—all of us have—just putting something on GitHub and calling it "open source" is not enough for it to take off. It must be prepared and ready for the community. That means a proper README, proper docs, proper instructions for looking at the project and contributing, beyond downloading and installing. Anyone can npm install, but it is a different thing to make it so that contributors can understand how to augment it. We will be taking our time putting our code out there, because we wanna do it right.

Any final message or parting thoughts?

Don't just write code; consider what you are doing as a craft. It takes time, and practice, and it takes time to build something that is resilient and beautiful. Open source is a great way to perfect that craft. One reason we built the dependency tool into our product was to get more people diving into code they can contribute to. Seeing other people's architectures will expose you to better approaches.

This is sort of what spawned bitHound.

Software development is a craft, and you should be proud of it. Take the time, learn the craft, and strive to build masterpieces—the ones that gather great attention in the community. We're in this to do more than just make money, and bitHound will be free forever for open source, with no restricted features. We're huge believers in this movement. We participate, and we want to help.


This work by Remy DeCausemaker is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. Special thanks to @itssamlowe for her contributions and edits.


The elements to a better future for software

In this interview, I take a deep dive into the life and motivations of Kyle Simpson, an open web evangelist and the author of the JavaScript book series You Don't Know JS. Find him on GitHub and see his many projects and posts on Getify.me.

Where are you from?

Oklahoma City, born and raised. Started school in Oklahoma, but now based in Austin, Texas—since mid-way through college. I live there with my wife and two kids. I moved to Austin because there wasn't much of a tech community in Oklahoma back in the 90s, and Austin was the nearest big tech hub. Now, I go back to Oklahoma to visit and see they have a fantastic community there, and I'm jealous! It's great to see!

Where did you go to school?

I started at University of Oklahoma, then transferred to and graduated from Texas State University with a B.S. in the engineering track of Computer Science.

What is your day-to-day like?

I have two different kinds of days; days where I speak/teach, and days where I do FOSS development. On teaching days, I'm connecting with community, and teaching JS to make a living—mostly in a corporate workshop environment, or public workshops associated with conferences. Those days I stand all day, and teach, and lecture, and walk people through exercises.

When not on the road doing that, I'm participating in the FOSS community—writing code, or writing blogs, or books. I spend lots of time doing that, constantly on GitHub with commits and pull-requests flying everywhere. I currently have a 300+ day streak going on GitHub—not to show off—but to inspire others to do more, and more regularly, with FOSS contributions. If my streak can encourage one person to do one extra contribution, that's what it's all about.

The best way to describe it: 50% of my time I spend teaching to pay bills, and 50% donating time to the FOSS community, to build awareness around the web platform and its technology with the theory of "all boats rise with the tide." The more people who learn and appreciate web tech, the more people will hire me to teach it to them. I'm an avid learner of things, and the best way to learn is to teach others. I think "how can I make this make sense to others?" As soon as I learn something, I write code to explain it, find a book or post to describe it in, and if I find something I didn't understand, branch off, and learn more, then start the cycle again. It just gets deeper and deeper and deeper.

I said a while back, "I think it is important for developers, especially those breaking into the industry, to find ONE thing you love to learn, and master it." It may not be the one thing you write all the rest of your code with, but it is the process of sticking with something to mastery that is valuable. Don't just jump from thing to thing to thing. While you may get a good paycheck doing that, there is something missing from the art of deeply understanding something. Once you've accomplished that, and you know what there is to know, then branching out to try things is great! Be looking while you branch for that next thing you want to master, rinse, and repeat. Constant jumping around as a "jack-of-all-trades-master-of-none" was more relevant 5-10 years ago. What is missing now is people who really know what they are doing.

Our industry currently rewards "flexibility" and working at the whim of someone else. "Yesterday, we wrote everything in Angular, and today, we're going to rewrite everything in React..." After enough of those inflections, you "become" a senior developer, but you miss out on appreciating a technology in the way it really deserves with deep understanding.

Mastery? How?

Well, specific answers are variable. Angular will be much different than Node. In general, the important skill is the curiosity and desire to learn. Don't just read a line of code and say, "I guess that is just how it works..." Keep reading, and keep following the rabbit hole down until you can say you understand every part of that line of code. I tell my workshop attendees that I don't expect you'll write your own framework, but that you could. Don't treat frameworks as black-boxes—you need to understand them intimately. If you choose something, know how it works but also WHY. Knowing when to change comes from understanding why—not because there is a great book, or how many "stars" the repo has. Those are poor signals. Beyond understanding of the open source community, your own understanding is the strongest signal.

You don't have to reinvent the wheel, but you should understand how the wheel rolls before you decide to bolt it onto the car you're building.

How did you get started in FOSS?

I was working for a company, not as a developer, but as a "User Experience (UX) Architect." I worked on a project management team prototyping User Interfaces (UIs), and handing them off to the dev team. Inevitably, everything I wrote was just put into production, or adapted slightly. I was working on a project in 2008 that needed to make cross-domain Ajax requests, and back then it was a real pain. I needed a solution to prove out my concept for the app, and I said, "I know some Flash, and I know that it can do that." So I built a JS API wrapper around an invisible Flash file, with the same API as the XMLHttpRequest (Ajax) object, and I called the project flXHR (flash-based XHR).

Once I got it working, I thought, "Maybe other people will find it useful?" so, I released my code as open source. Back then, open source was pre-GitHub, so source was all on my website, and I pointed people at it from blog posts, etc. I also put code on Google Code too, but there wasn't as much of a community back then either. In early 2009, I wanted to get into the conference scene. 2009 was the first big JavaScript-specific conference, JSConf, and so I decided to go and speak about SWFObject (one of the most downloaded projects on the web at the time), which I was using heavily in flXHR. I was a core dev for SWFObject and gave a "B track" talk at the conference. Only like three people showed up to my first talk, but I fell in love with the idea that I could speak to call attention to open source code and inspire others to help make it better!

The fullness of my open source perspective came later that year, in November of 2009. I released the project I'm probably most known for: LABjs (a performance-optimized dynamic script loader). I gave a talk at JSConfEU in Berlin, Germany about script loading. Two hours before going on stage, I was overhearing lots of people talking about this new site called GitHub, so I went and signed up while I was sitting in the audience. I pushed all my LABjs code there, and that was my first official: "I am in the FOSS community" moment.

One thing you wish undergrads would be exposed to before they leave school?

Unquestionably, "Simple Made Easy," a conference talk by Rich Hickey, who works at Cognitect on the Datomic database as well as the Clojure language. He's a completely brilliant dude. The talk is so important to me that I don't just have it in a bookmark, but on my toolbar, and I reference it practically daily. The premise is, there are two terms that people conflate: "simple" and "easy." He actually compares "complex" versus "hard." The root word for complex comes from "complected," as in strands of rope being braided together. Highly braided code is complex, and harder to maintain and refactor. Software developers, when building, focus on building "easy" software: software that is easy to install and use, and does a lot for you. That pursuit often results in complex software.

If developers go after modular simple (non-complex), non-braided software, they can often end up with easy software too. If you go after easy, you usually end up with complex, but if you go after simple you can also achieve easy.

Node.js is my example. I was trying to install it on a Virtual Machine, and met the operating system (OS) requirements, but couldn't install it because the OS didn't have the proper version of Python. Node, a JavaScript framework, uses Python for its installer? Why do I need Python to install Node??? The answer was because writing a cross-platform installer in Python was easier... but when you add additional braiding, you can also make it more complex to implement and maintain.

Nearly every framework on the planet claims to be modular, but most are not. Modular, to me, means that a piece could be removed, and the framework would still be able to be used. "Separate files" does not a modular framework make, if all those pieces are required for the framework to work! My goal, my desire, is that developers go after simple modular design, and that be the most important ethic. What comes from that then, is proper design that can be made easy for people to use. We need to stop worrying as much about creating pretty-looking, "easy" interfaces, and instead worry a lot more about making simple software.

What is your toolchain?

Sublime is my text editor. In principle, I love browser-based editing, but I run the nightly versions of my browsers, to find bugs early while I still have a chance to get them fixed. I can't handle browser crashes and uncertainty when I'm writing code.

Sublime has so many plugins you can use for whatever you want. Though I don't use many, other people like the "intellisense" plugins, and many other plugins that are part of a great ecosystem they've built.

My other main tools are the browser developers tools in Firefox and Chrome.

My other mission critical tool is the Git command-line tool. GitHub is my graphical git client, because it effectively augments my usage of the git CLI.

Git. How do you use it?

I don't have a lot of fancy process; it depends on whether I'm writing a book or writing code. For books, when I make a change, I want to write a coherent section, and make one commit per section. In the writing of one section, I may add to the Table of Contents, or clarify another section, or add another. Whenever I have a logical series of changes, I git add each individual file (files written in markdown, BTW), and git commit -m. In the commit message, I list which book the commit pertains to, which chapter(s), and a quick description of what it was about. The commit history of the book series really tells a story in and of itself, of how over months I figured out how to write them, section by section, reorganization by reorganization!

I typically use git commit -m ".." && git push, so that I push right after committing.

It is not often I do batch committing, usually only when I've been on an airplane without wifi for a while, in which case I'll push 5-8 commits at a time once I get back online. Usually, I try to push right after I finish the section I'm working on.

For code, I have two different strategies. If it is a "big" feature, I create a feature branch, and I put several commits into the feature branch. The goal isn't to finish the feature and do a massive merge, but to merge regularly. I like to develop in stable batches, merge regularly, and do no harm to master. If I do make a bugfix on master while developing a feature, I rebase the feature branch to get that fix in. I don't necessarily do short-lived branches, but I do short-lived differences. :)

Many devs do squash merges, and want to appear to have "Dreamed up this perfect feature and written it perfectly all at once." I don't want that. I want to preserve the history. In rare cases with a pull-request that has lots of individual commits that are all logically connected, I'll do a squash-merge.

In cases where I have a simple bug fix to make, I'll generally just add and commit directly to master. Regardless, every time I'm doing the final commit, I'm committing both the docs and tests. I firmly believe that it isn't DONE until it has docs and tests. I don't really do Test Driven Development (TDD), but Test-oriented or Test-informed development. I have a set of tests, and sometimes they are written ahead, but the typical plan is "I don't know how it should behave" when I fix something with a new feature; it will take me working through the implementation to know. I develop the tests along with the code, code and test, rather than writing code after tests or the other way around.

I'm much more formal when working on other peoples' projects, or as a bigger team. I try to stay away from scenarios where I need the complicated cherry-picking or interactive rebasing features of Git. I've done those things only a few times in my career. I use GitHub for most of those things, and it handles those cases pretty well. A pull-request with 2-3 commits, whatever their process was, is something that is useful to preserve in the history, so I'll usually just merge it as-is.

What are you currently working on?

Other than my books, I have three main areas of project interest I cycle through on any given week.

Number 1, which gets most of my interest, is asynchronous ("async") programming patterns (promises and generators, that sort of thing). I have a library called asynquence, a promises-style asynchronous library. It can also handle generators, reactive sequences, and even CSP (see Hoare's seminal book "Communicating Sequential Processes"), with these higher-level patterns layered on top of the basic "sequence" abstraction. Most other libraries have just one flavor of async programming, but I've built one that can handle all the major patterns. I think async is one of the most important things that JS devs need to get up to speed on. I've got several conference talks and projects about that topic.

We're recognizing more and more that sophisticated programs need more well-planned and capable async functionality. Callbacks alone don't really cut it anymore.

[Remy Decausemaker: Yes, I reckon this jives well with Python incorporating Tulip and features from Twisted into the core library, starting with Python 3.3.]
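
(To make that aside concrete, here's a minimal sketch of a generator-based coroutine using Python's asyncio, the standard-library module that grew out of the Tulip project; the coroutine name is made up for illustration.)

    import asyncio

    @asyncio.coroutine
    def greet():
        # Suspend here without blocking the event loop, then resume.
        yield from asyncio.sleep(1)
        print("hello, async world")

    loop = asyncio.get_event_loop()
    loop.run_until_complete(greet())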

Number 2 is in the same vein as the "compile to" languages for JS. Experimentation is important for the language. Taking that to its extreme, I have a set of tools to define custom JS syntax and transpile to standard JS—basically, standard JS + custom syntax. I'm working on tools that do "little" transformations on your code. The bigger picture is "inversible transforms": transformations that can be applied in both directions, non-lossy transformations. If you can define them two-way, you can have your "view" for your own editor, and a "view" for the team repository. You check code in and out, and you can work on code the way your brain works, and the team can work in the way theirs does.

When you use CoffeeScript, for example, it is a lossy transformation, and an "all or nothing" decision. Everyone needs to work on it in this way, or not at all. The simple version of what my tools can do is stylistic things like spaces versus tabs. The tools can change that code style for you instead of just complaining with errors.

ESRE is one such tool I'm building for two-way code-style transformation.

let-er is another tool that transpiles a non-standard version of JS block-scoping into standard JS block-scoping. I have a series of in-progress prototypes of these various tools, and eventually I can go back and write the overall "meta" tool that drives them with the two-way transformations.

Number 3 is a crossover between JS/CSS. It is a project in the templating world. There are two extremes in templating: zero-logic templating, or full-programming-language templating. Zero-logic templating includes projects like Mustache. We don't want business logic in the views, so we use no logic at all. But in practice, this creates very brittle controller code that's closely tied to the structure of the UI, and that brittle connection is precisely what we wanted to avoid by keeping the concerns separate.

The other extreme is you have a full programming language in your templating. My metaphor is, "if I hand you a pile of rope, I can teach you to build a rope-bridge, which is helpful, or a noose, which isn't quite so helpful." If you are in a "15-minute-must-do-feature crunch," you'll just drop in if-statements and function calls, and then put a TODO comment to fix it, but then you rarely do. That's how we unintentionally leak business logic into our views.

Neither extreme is good enough. We need something in the middle, that has enough logic for structural UI construction, but keeps out all the mechanisms that you can abuse to do business logic.

For 4-5 years, I've experimented with a templating engine that is a happy medium, called grips. It has enough structural logic, but is restrained so that you can't do things like function calls, math, etc. It's mature enough that I use it in my projects and have rolled out production websites with it. It is definitely a work-in-progress, but it is "stable enough" to be used. People still like to bikeshed about the syntax, for sure, and may not like the choices I made. But I think I at least asked the right questions, like: What does a templating engine need or not need? I started with nothing and only added features when it was necessary to do structural stuff. You have basic looping and conditionals, but in limited fashion. I summarize that balance as, "if you find yourself unable to do something, it should be a signal that you don't need it in your templating engine."

Two years ago, I started watching the rise of LESS, SASS, and other tools like COMPASS. What struck me was how limited they were in solving the problems I thought were important in the CSS world. Those tools require the CSS to be recompiled every time you make a change. "Compile an HTML template, re-render with external data" is a solved problem. For some bizarre reason, it didn't occur with CSS though.

So, I invented grips-css, a CSS templating syntax similar to LESS, on top of the core grips templating engine. Most importantly with grips-css, data is external (i.e., CSS variables), which means that all the data operations for which projects like SASS are inventing declarative syntax inside of CSS can, and should, instead be done outside of CSS, producing new data and then just re-rendering the template.

If I wanted to change "blue" to "red," I don't need to recompile all my CSS; I can take my pre-compiled CSS and just re-render it with the different variable data.
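
(As a minimal Python sketch of that compile-once, re-render-with-data idea, not grips' actual API; the template and variable name are made up for illustration:)

    from string import Template

    # "Compile" the CSS template once...
    css_template = Template("a { color: $link_color; }")

    # ...then re-render it with different external data; no recompilation needed.
    print(css_template.substitute(link_color="blue"))  # a { color: blue; }
    print(css_template.substitute(link_color="red"))   # a { color: red; }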

The compiled CSS template is basic JS, which means you have the option of re-rendering CSS dynamically in the browser on the fly, for example responding to changing conditions with CSS. It's much cleaner to simply re-render a snippet of CSS and inject it into the page than to use brittle JS code to change CSS style properties. Of course, you can also run grips-css on the server much like you do with current preprocessors. The point is you have both options with grips-css, instead of being limited to server-only and inefficient total recompilation. What I'm trying to do is suggest that the spirit of what SASS and the others are going for is good, but the way they are going about it is limited and not terribly healthy for the future of CSS.

CSS templating is, I think, a much cleaner and more robust way to push CSS tooling capability forward.

You mentioned important problems to solve in CSS? What are they as far as you are concerned?

Three main things were solved in LESS: variable data that can be changed and reused; structural things like mixins, to achieve DRY coding; and extends, which is a light version of polymorphism to override pieces of templates. We needed to solve those things, and they did, but as I said, we solved this in text templating years ago, and we should apply those same principles from HTML/text templating to the world of CSS. There's no reason CSS needs to invent its own solutions for these problems.

So, what is next?

Putting on the "prognosticator hat," what do I think we'll see in the next 3-5 years?

Applications are going to become "UI Optional." The new Apple Watch has a pretty limited display, and some apps won't show anything at all. With things like Google Glass or Oculus, you'll have apps that don't have any visual representation at all. This is what I call the coming "APIs-as-Apps" era. Your "app" might be nothing more than a piece of code that can send and receive data—a distributed API. We have some companies that build apps that care greatly about branding. Twitter wanted you to experience their app the way they wanted. Facebook wanted you to experience the Facebook app the way they wanted. But there is a reality that people will experience apps without your UI at all. Companies must give up control of the presentation, as our devices and interactions with them diversify from purely visual to audible or tactile interactions.

My watch may read things to me without UI, and that is nothing more than a data operation. Facebook should provide the text for my watch to read to me. The UI doesn't necessarily go away, but it becomes an optional add-on to apps. In the longer term, I'd like to stress the decoupling more. We see people building single-page, complex, front-end driven apps. Most of the app is in the front-end. Gmail is cool to use, sure, but I don't think they are very flexible in that new optional-UI trend. It will be hard to separate Gmail the App from Gmail the UI.

Developers are making assumptions about access to unlimited, fast bandwidth with every retina image served up... We're not designing things in layers the way we know we should be. For all the people on slow connections it's just, "Meh, they'll get better access eventually." We need to give users tools in the browser to choose what is important to them. I should be able to say, "No, I don't want a huge single page Gmail app, I need a simple post-in-page mobile version." This is much more than just expecting a "mobile site." We need layered sites.

We need to take a serious look at how much we assume that UIs and data bandwidth usage are an unlimited resource. This could be like "responsive 2.0"—responsive not just to screen layout, but to network conditions too. The app should figure out that I am roaming and not shove everything at me it possibly can. UI needs to be decoupled, simplified, layered, and more focused on portable apps.

I heard a conference talk years ago from PPK (Peter-Paul Koch). He suggested, "Why is it I can't send a text to share an app with you? Why do you have to buy it from an app store?" He proposed that monetization would shift from the app to the data. He believes apps should be self-contained, portable pieces of code that can be freely shared around regardless of device. JS is great for this because it is ubiquitous. For instance, if Facebook wanted to charge me for data, because there was no UI on my device within which to serve ads to me, I should be able to decide if I want to pay them for the data of my updates.

I hope that kind of thing represents the future of the web and the usage and consumption of apps.
