Channel: the evolving ultrasaurus » code

getting started with drupal


Emeline Glynn and Anthony Glynn gave a helpful talk called “How to teach anyone Drupal in 7 months.” The timeline was based on their experience: Anthony, an experienced Drupal developer, taught Emeline remotely over a period of 7 months, to the point that she is now working professionally as a Drupal developer. Slides posted here.

Emeline notes that you need a passionate & dedicated student. It can be very frustrating for someone new to development, especially learning command-line tools, git and debugging, but she was excited about it and found it very rewarding even when she was very new. She usually spent 4-6 hours a day. Anthony noted that if someone has experience with another framework, you can expect the timeline to be 2-3x faster.

I liked this breakdown of stages in learning:

  1. Learn the Language
  2. Get the Skills
  3. Cross the Chasm
  4. Leave Footprints

For me, as a developer with 20+ years of experience, I pick this stuff up fairly quickly.  I spent a few solid days on getting my dev env set up, understanding the major components and making small changes to a module.  As important as the code is understanding its patterns and jargon.  The immersive experience of CapitalCamp, with its enthusiastic community, and these references to key learning resources have significantly accelerated the learning curve.

This learning path focuses on the non-programmer; however, Anthony suggested (and I agree) that the experienced developer would take a similar path.

  1. Register at drupal.org
    drupal.org/planet should be your home page, with news feeds of all the best blogs online
  2. drupal.org/security – the Drupal security team meets once a week and issues security advisories less frequently — they also have a mailing list.
  3. Meetups — Anthony learned by reading Pro Drupal Development & going to meetups. (He noted that the community is a bit less friendly on line.) Meetups provide inspiration and the landscape of what to learn.
  4. Setup your dev env
    Recommended: Linux, git, Drush, phpMyAdmin, Firebug or Chrome Developer Tools
    (Linux much easier than Windows)
    On Mac OSX it is pretty easy too, but finding good setup instructions was hard.  I posted the steps I use and have since added apache vhost config so I can run multiple drupal sites easily.
  5. Embrace the command line! Students may be freaked out at first, but it makes you very productive. If the student is non-technical, there are a lot of skills that are not Drupal that you still need to learn (e.g. command line, git)

Consider starting with a distribution profile

  • Drupal Commons: if you want to manage communities
  • Drupal Commerce: e-commerce
  • OpenPublic: government site

Terms & Landmarks

  • API: api.drupal.org
  • node: a piece of content
  • module: a Drupal extension written in PHP

Helpful modules

  • administration menu: allows you to hover on Drupal menu items and shows submenus (drill down)
  • coffee: type in the page you want to go to and link pops up (also for the admin UI)
  • module filter: allows your module page to look clearer
  • devel: key helper module that lets you inspect variables
  • devel themer – helps you develop themes
  • views – pretty much on every site
  • features – export your config (config + content is in the database)
  • panels & context: each lets you control what appears at a given path
  • context – if you are in this part of the site, show this
  • panels – lets you pick the layout

Learn about themes – Omega, Adaptive theme and something else.  Also, subthemes.

Build

Debugging tips

  • “Calm down and clear the cache”
  • become familiar with the Apache logs and Drupal watchdog logs

Make students feel adventurous! DB backup + git makes it safe

More tips

  • codecademy – good for PHP
  • make sure you understand the hook system
  • look & use other people’s code
  • write patches
  • write your own blog
  • PHP function: debug_backtrace

litmus test / goal is to write your own modules


empowering content creators with drupal


Bryan Braun (@bryanebraun) gave a refreshingly opinionated talk Empowering Content Creators with Drupal.  Coming directly from the Ruby and Rails communities where a core value is to articulate best practices, it is great to see this kind of guidance from a member of the Drupal community.  (slides here)

Bryan referred to a blog post by Dilbert creator Scott Adams on the Botched Interface Market, where sites like Orbitz and Travelocity had such poor user experiences that they inadvertently created a market opportunity for new sites like Hipmunk.  Bryan’s goal is that Drupal should not become a botched interface market.

He organized his advice into two categories:

  1. More Control: Power Tools, Reducing Risk
  2. Less Confusion: Better Interface, Streamline Workflow

Bryan highlights techniques borrowed from other platforms (mostly WordPress), and points out lots of “low-hanging fruit”.

Example: the default layout for a node has a lot of labels and whitespace which doesn’t contribute to understanding the page.  The default UI for a file attachment is not concise.  Compare Gmail’s message compose interface vs. what it would be in Drupal (~1/3 of your screen vs. a page that is two screens high!)  Having a very long page can contribute to confusion, since you have to scroll to get to some of the functionality.

Larry Garfield’s blog post Drupal is not a CMS points out that Drupal is something you use to build a CMS.  Perhaps we should think of it as a CMF, a Content Management Framework.  In this case, we are designing the workflow and user experience of a new custom CMS that we build with Drupal.

Myth: I am not a designer
Fact: um.. you actually are, whether you identify as a designer or not. The decisions you make when you are building a site affect the end user — these are design decisions!

Myth: it will take a long time and effort
Fact: it could but it doesn’t have to

More Control

We can make our content creators more productive by giving them “power tools” while at the same time reducing the risk that they will make mistakes, which gives them the confidence to move faster.

WYSIWYG

It’s not really an option anymore to tell people to edit HTML. [My personal perspective is that even though I am perfectly capable of HTML markup, why should you make me type extra characters? why should I have to learn the markup that works with your stylesheet? Though I do like the option of editing HTML when the rich text editors fail me.]

The following modules are good for WYSIWYG and File Upload:

  • wysiwyg or ckeditor, which both appear to be rich text (WYSIWYG) editors
  • media or imce for uploading and managing files (also seems to be an either/or choice)

Node Editing Power

Always check “Add to Menu” on the Node Edit page — for most content creators, if a menu isn’t there, the feature doesn’t exist.

With context, use a module like Context Node, which can pull out just a few context options and put them on the Node Edit page.

As a themer, you can make a bunch of templates and the content creator can pick one with the template picker module.

Use Short Codes

Short codes are a best practice from WordPress and can accelerate content creation:

[video:url]
[[nid:123]]

Drupal Modules:

  • Caption Filter
  • Video Filter works with the wysiwyg module or with TinyMCE or CKEditor (not sure why those are in a different category).
  • Insert View lets content editors add views without editing PHP
  • Make your own

In Drupal we tend to expose this kind of functionality as a block, but that gives the power to the site builder, rather than the content creator. In Drupal, the modules that do this tend to be called filters.

Reduce Risk

Always put description text when you are creating new fields (you probably won’t come back later). If you are going to come back later, you can improve it then — at least put something in.

  • Autosave
  • Revisions: just enable it by default
  • Preview — poor experiences with the preview button; it’s not always what you expect
  • Turn off the preview button on the content types page; use the view_unpublished module instead

Granting Appropriate Access is not an easy fix.  You need to understand how your organization works, know the users, watch them work, etc. Once you do know those things, you can set up good workflows for them with clear options for the different roles in the organization.


Less Confusion

Admin Themes can help

  • Rubik is less cluttered: 2-column node-edit page, description text on
  • Ember is responsive

Logical Grouping

Group things according to how your content creators think about them, not how you built them. Consider grouping admin menu items into safe vs. risky like WordPress does. Bury advanced and less-frequently used functionality.

  • Fields into fieldsets & field collections
  • Admin Menus for content creators
  • Permissions into Roles
  • WYSIWYG Icons

Default to expanded for highly recommended options, collapse for optional, and hide anything that is unnecessary.

  • Use conditional fields (never show someone an option that does nothing)
  • Simplify hides fields
  • Jammer hides many other things
  • Hide Submit Button avoids double submissions and, just as importantly, communicates to the creator or editor that the site is actually doing something
  • Preview Button (see above)

Streamline Workflows

  • Use Chosen instead of the default multi-select
  • Add Another is great for repetitive node creation.

Dashboard

There are lots of great dashboard modules.  Consider what your creator or editor wants to see the most — make that their default homepage.

“the best tool of them all… …is the one that works best for your users”

It looks like I’ll still have to do quite a bit of experimentation, since Bryan often points to multiple modules to solve the same challenge — still a great reference to address common concerns and highlight best practices.

rails 4 with mongodb on osx


This post covers getting started with Rails 4 and MongoDB, using the Mongoid gem. As part of getting up to speed, I enjoyed reading the Rails + Mongo take on the object-relational mismatch by Emily Stolfo from MongoDB.

For starters, here’s how to create a super simple toy app. I assume Ruby is installed via rvm, but that you are new to Mongo.

Install Mongodb on Mac OSX

brew update
brew install mongodb

To have launchd start mongodb at login:
ln -sfv /usr/local/opt/mongodb/*.plist ~/Library/LaunchAgents
Then to load mongodb now:
launchctl load ~/Library/LaunchAgents/homebrew.mxcl.mongodb.plist
Or, if you don’t want/need launchctl, you can just run:
mongod

Rails 4 with Mongoid

I chose Mongoid over MongoMapper (see quora, stackoverflow)
I used MRI 1.9.3 (at this writing, Mongoid supports HEAD but not 2.0)
rvmrc:

rvm use ruby-1.9.3-p429@rails4 --create

added to Gemfile:

gem "mongoid", git: 'git://github.com/mongoid/mongoid.git'

on the command-line:

rails new mongo-people --skip-active-record
rails generate mongoid:config
rails generate scaffold person name street city state
rails s

Woo hoo! We’ve got an app — looks like a vanilla Rails app from the outside, but it is different on the inside:

class Person
  include Mongoid::Document
  field :name, type: String
  field :street, type: String
  field :city, type: String
  field :state, type: String
end

No database migrations needed. If we want a new field, we could just declare one in the model and add it to our views. I assume migrations could be used for data migration, but that would be a subject of another post.
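The reason no migration is needed is that in a document store a field is just a reader/writer pair backed by the document’s attributes. Here is a rough pure-Ruby sketch of the idea — an illustration only, not Mongoid’s actual implementation:

```ruby
# Illustration only -- NOT Mongoid's actual implementation. A field macro
# simply defines accessor methods backed by a hash of attributes, which is
# why adding a field needs no migration in a document store.
module DocumentSketch
  def self.included(base)
    base.extend ClassMethods
  end

  module ClassMethods
    def field(name, type: String)
      define_method(name) { (@attributes ||= {})[name] }
      define_method("#{name}=") { |value| (@attributes ||= {})[name] = value }
    end
  end
end

class Person
  include DocumentSketch
  field :name, type: String
  field :city, type: String
end

person = Person.new
person.name = "Ada"
puts person.name  # => Ada
```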

References

Rails 4 with MongoDB Tutorial

rails 4 twitter omniauth with mongodb


If you are brand new to MongoDB and Rails 4, take a quick look at my very basic rails 4 mongodb tutorial before diving into this one.

Gems: mongoid, omniauth, figaro

Let’s get started

Make sure you have Rails 4 (rails -v). We’ll make a Rails app skipping test-unit (-T), since I prefer RSpec, and omitting ActiveRecord (-O) since we’ll be using MongoDB.

rails new parakeet -T -O
cd parakeet

Add the following to the Gemfile

gem "mongoid", git: 'git://github.com/mongoid/mongoid.git'
gem "omniauth-twitter"
gem "figaro"    # key configuration using ENV

Now some auto-code generation for quick setup:

rails g mongoid:config
#      create  config/mongoid.yml

rails generate figaro:install
#      create  config/application.yml
#      append  .gitignore

I’ve decided to use figaro which allows me to easily configure my API keys without committing them to my source repo, which is very helpful when posting open source code. We need to set up the app for an API key in order to auth with Twitter.

Get Developer Key from Twitter

Sign in using your regular Twitter account at: https://dev.twitter.com/

Then in the upper-right, select “my applications”

Click “Create a new application” and fill in the form. I called my app blue-parakeet for uniqueness — you’ll have to make up your own name.

Make sure you put in a callback URL, even though you won’t use it for development (since omniauth tells twitter the callback URL to override this setting) — if you don’t supply one you will get a 401 unauthorized error.

Read and Accept the Terms, then click “Create Your Twitter Application”

Now you have a “key” and “secret” (called “consumer key” and “consumer secret”) which you will need to configure your rails app.

Using Figaro gem for Configuring API keys

Edit config/application.yml

# config via Figaro gem, see: https://github.com/laserlemon/figaro
# rake figaro:heroku to push these to Heroku
TWITTER_KEY: ABCLConsumerKeyCopiedFromTwitterDevPortal
TWITTER_SECRET: XYZConsumerSecretCopiedFromTwitterDevPortal

Configuring Omniauth

Edit config/initializers/omniauth.rb

Rails.application.config.middleware.use OmniAuth::Builder do
  provider :twitter, ENV['TWITTER_KEY'], ENV['TWITTER_SECRET']
end

Now OmniAuth is set up to auth with Twitter. Let’s run the server. Install mongo with brew install mongodb if you haven’t already. Also, if you don’t have mongo set up to run automatically at startup, then run Mongo:

mongod

Then run Rails server:

rails s

Go to http://localhost:3000/auth/twitter and you’ll be presented with twitter auth

However, when we authenticate, we get an error, since we haven’t configured our routes yet:

Create a Sessions Controller, Add Routes

Next step is a sessions controller and a route for the OAuth callback. We’ll make a placeholder create action that just reports the auth info we get back from Twitter.

On the command line:

rails generate controller sessions

Edit the newly created file, app/controllers/sessions_controller.rb

require 'json'
class SessionsController < ApplicationController
  def create
    render :text => JSON.pretty_generate(request.env["omniauth.auth"])
  end
end

add the following to config/routes.rb

get '/auth/:provider/callback' => 'sessions#create'
get '/auth/failure' => 'sessions#failure'
get '/signout' => 'sessions#destroy', :as => :signout
root :to => redirect("/auth/twitter")  # for convenience

Now go to http://localhost:3000/auth/twitter — after authenticating with Twitter, you will see the user info that Twitter sends to the app from the authentication request (see docs for explanation of each field). The general stuff which is more consistent across providers is in the ‘info’ section, and most of the interesting twitter-specific info is in the “extra” section:
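For reference, the auth hash is a nested Ruby hash along these lines — a trimmed sketch with made-up values (the real payload has many more Twitter-specific fields; see the omniauth docs):

```ruby
# Trimmed sketch of the shape of request.env["omniauth.auth"].
# Values here are invented for illustration.
auth = {
  'provider' => 'twitter',
  'uid'      => '12345',
  'info'     => { 'name' => 'Ada Lovelace', 'nickname' => 'ada' },
  'extra'    => { 'raw_info' => { 'followers_count' => 42 } }
}

puts auth['info']['name']  # => Ada Lovelace
```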

User Registration

For this app, we’ll use a simple user model, just to show that there’s no magic here — we’re only using Twitter auth not storing our own passwords, so we don’t really need the full features of the lovely Devise gem.

rails generate scaffold user provider:string uid:string name:string

Add to app/models/user.rb

  def self.create_with_omniauth(auth)
    create! do |user|
      user.provider = auth['provider']
      user.uid = auth['uid']
      if auth['info']
        user.name = auth['info']['name'] || ""
      end
    end
  end

With Rails 4 the recommended pattern to lock down model attributes that we don’t want changed from form submits (or malicious attacks) is in the controller. In app/controllers/users_controller.rb change:

    def user_params
      params.require(:user).permit(:provider, :uid, :name)
    end

to:

    def user_params
      params.require(:user).permit(:name)
    end

and then remove the corresponding fields from app/views/users/_form.html.erb

Finally, the real create action for the sessions controller, plus a destroy action for the /signout url we defined earlier:

  def create
    auth = request.env["omniauth.auth"]
    user = User.where(:provider => auth['provider'],
                      :uid => auth['uid']).first || User.create_with_omniauth(auth)
    session[:user_id] = user.id
    redirect_to user_path(user), :notice => "Signed in!"
  end

  def destroy
    reset_session
    redirect_to root_url
  end

With this app, we’ve got a basic understanding of Twitter OAuth using Rails 4 and the OmniAuth gem. We didn’t actually do anything specific to MongoDB, and there’s no testing yet. It is important to understand the technology we’re working with before testing or even writing production code.

Special thanks to Daniel Kehoe of RailsApps. His Rails 3 OmniAuth Mongoid tutorial provided a helpful foundation.

rspec: mixing transaction and truncation database cleaner strategies


I’m using capybara-webkit for integration testing with rspec, which is awesome, because it is faster than other full-browser testing solutions, like Selenium, but it is slower than RackTest (the default for RSpec testing). RSpec provides a nice way to specify an alternate driver when running Javascript tests, but configuration can be a little tricky. I got this working via an excellent blog post, pointed out by Sarah Mei, who was pairing with me for the day.

I think it is important to actually understand the code that I copy/paste, so I took a little time to read up on the details which I’ve summarized below.

Favorite Testing Gems

I won’t elaborate on RSpec; the concepts in this post likely apply to test-unit as well. I’ve written before about why RSpec is my favorite.

Rack::Test

When using RSpec in Rails, we use the rspec-rails gem, which configures a bunch of stuff that makes it easy to get started. By default, integration tests will use rack-test, a lovely little gem that supports methods like get, post, put and delete and handles the rack request and response objects. (via platformatec) It maintains cookies and follows redirects, but is far from a full browser; most notably, pages won’t execute Javascript. Rack::Test is quite lightweight, with tests running in the same process as your Rails code. Its speed is a huge advantage and worth retaining for your tests that don’t need more.

Capybara

Capybara is the world’s largest rodent; in the Ruby community it is also the name of a favorite gem for testing the content of web pages. (A successor to Bryan Helmkamp‘s WebRAT, so named for Ruby Acceptance Testing, which instigated the rodent naming theme.) Capybara is wonderful with its support for many “drivers,” which allow a consistent API across different solutions that offer different levels of browser support with different performance characteristics.

Capybara::Webkit

Thoughtbot kindly created the capybara-webkit gem a few years ago, which I’ve found to be more reliable and performant than Selenium, and is my favorite choice for testing pages that need Javascript.

One of its creators, Joe Ferris, explains how it works (via stackoverflow)

  1. Capybara boots up your rack application using webrick or thin in a background thread.
  2. The main thread sets up the driver, providing the port the rack application is running on.
  3. Your tests ask the driver to interact with the application, which causes the fake web browser to perform requests against your application.

Database Cleaner

The DatabaseCleaner gem is super helpful for our typical Rails app that relies on a database. We always want a “clean slate” when we start our tests and this nifty gem gives us a bunch of options with a consistent interface for various database choices.

Configuration

To configure these solutions correctly, it is critical to understand that with Capybara::Webkit our target app code is running in a separate process from our tests. This means that when we set up our test data, RSpec is running in one process and needs to actually write to the database; then our app code reads from the database in another process. Whereas with Rack::Test, the tests and the target code run in the same process. That’s why we can’t use a “transaction” strategy to reset our test environment with Capybara::Webkit. Instead we use the “truncation” strategy, which simply blows away all of the data after each test run.

Why bother with transactions?

Truncation will work just as well with Rack::Test as transactions, so why introduce the complexity of two different configurations? The Database Cleaner README explains: “For the SQL libraries the fastest option will be to use :transaction as transactions are simply rolled back.” Sarah Mei elaborated on this by reminding me that the commit to the database is what takes the most time, and the transaction is never committed; it is simply rolled back at the end of your test. Transactions are pretty speedy, so we want to use the truncation method only when absolutely necessary.

Just Show me the Code

Here’s the configuration that was documented by Eric Saxby from Wanelo, which worked for me as well:

config.use_transactional_fixtures = true

config.before(:each, js: true) do
  self.use_transactional_fixtures = false
  ActiveRecord::Base.establish_connection
  DatabaseCleaner.strategy = :truncation
  DatabaseCleaner.start
end

config.after(:each, js: true) do
  DatabaseCleaner.clean
  ActiveRecord::Base.establish_connection
  self.use_transactional_fixtures = true
end

How does this work exactly?

We are set up to use transactions by default, which is built into rspec-rails and does not rely on DatabaseCleaner. Then, for our JS tests, we tell RSpec not to use transactions and instead instruct DatabaseCleaner to set up before each test runs with DatabaseCleaner.start and then clean up after with DatabaseCleaner.clean.

I have no idea why ActiveRecord::Base.establish_connection is needed, but if we don’t do that, then rake spec hangs after my first JS test with this ominous warning:

WARNING: there is already a transaction in progress

Perhaps someone reading this can explain this detail, but I’m happy to have a configuration that works and hope this helps other folks who want fast tests that run reliably.

The post rspec: mixing transaction and truncation database cleaner strategies appeared first on the evolving ultrasaurus.

ruby to find latitude/longitude for a list of cities


I have a relatively short list of cities which I want to plot on a world map. The list is a little too long for a manual lookup, but I don’t know exactly how I’ll use it in an app, so I figured out how to do it with some simple ruby in irb using the lovely geocoder gem.

I had my cities in a spreadsheet. I selected a few cells and was able to simply paste into irb and split the string on newlines to get an array of the cities.


gem install geocoder
irb
>> require 'geocoder'
>> a = "Honolulu, HI
>> Boston, MA
>> New York, NY".split("\n")
 => ["Honolulu, HI", "Boston, MA", "New York, NY"] 

>> a.map do |city|
    d = Geocoder.search(city)
    ll = d[0].data["geometry"]["location"]
    puts "#{city}\t#{ll['lat']}\t#{ll['lng']}" 
end

Honolulu, HI    21.3069444  -157.8583333
Boston, MA  42.3584308  -71.0597732
New York, NY    40.7143528  -74.00597309999999

Then I could copy/paste the irb output back into my spreadsheet. Ta Da!

It appears that the free Google API has some kind of throttling, so this is only good for short lists of <20 cities.


jekyll desktop app with node-webkit on osx


I had an idea this morning that I could make a simple desktop app by combining the lightning-fast website generation capabilities of jekyll with the awesome Node-Webkit that lets us build native wrappers for HTML5 apps. I checked out this nice intro to Node-Webkit, and unsurprisingly ran into a few gotchas, documented below for other adventurers and my future self:

Simple Website with Jekyll

gem install jekyll
jekyll new experiment
cd experiment
jekyll serve

go to http://localhost:4000
you should see the default sample jekyll site

Make a Native OSX App

Download and install Node-Webkit pre-built binary

At the root of your jekyll site, create a file named “package.json”

{
  "name" : "nwk-experiment",
  "window" : {
    "width" : 800,
    "height" : 600,
    "toolbar" : true
  },
  "main" : "app://whatever/index.html"
}

The app root url is a nice feature of node-webkit which makes it pretty easy to transport any website into this system of building a native app.

jekyll build  # creates the site
cd _site
zip -r ../app.nw * 
cd ..

Most tutorials tell you to zip the directory. The first time I tried, I got an Invalid package error: “There is no ‘package.json’ in the package, please make sure the ‘package.json’ is in the root of the package.” On OSX, we need to zip the files from inside the directory that has the ‘package.json’ file in it. (via crashtheuniverse)

Run the App

open -n -a node-webkit "./app.nw"

When I double-click on the app.nw file, I see the directory, not my index file. I haven’t figured out that part yet. Still a work in progress!


time tracking with google calendar


I just created a very lightweight time tracking system where we can use a Google Calendar to track who does what when and then get a spreadsheet that shows all of the hours worked by individuals.

It goes with a spreadsheet where the first row has a header like this:
Date Meeting Sarah Paul Glen


Then we just make events on the Google calendar and invite whoever is working together at that time — could be a team meeting or one of us working on our own. Time will be tracked by the first part of the email address
(paul@whatever.com and paul@yahoo.com would both get tracked in the “Paul” column). Currently the script doesn’t support two different people who share the same email name before ‘@’.
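The name-extraction rule above can be sketched in plain JavaScript (`columnNameFor` is a hypothetical helper name for illustration, not part of the script below):

```javascript
// Everything before the "@" becomes the column name, so the same local
// part at two different domains lands in the same column.
function columnNameFor(email) {
  return email.split('@')[0];
}

console.log(columnNameFor('paul@whatever.com')); // "paul"
console.log(columnNameFor('paul@yahoo.com'));    // "paul" -- same column
```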

You just have to fill in your calendar id, which you can find if you choose Calendar Settings and scroll to the bottom of the first page (in Calendar Address section).

// add menu
function onOpen() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var menuEntries = [{name:"Calculate Hours", functionName: "calculateHours"}];
  ss.addMenu("Hours", menuEntries);
  // calculate on open
  calculateHours();
}
 

function meetingSummary(cal_id){
  var hours = 0;
  var cal = CalendarApp.getCalendarById(cal_id);
  Logger.log(cal);
  Logger.log('The calendar is named "%s".', cal.getName());

  var this_year = new Date(2013,0,1);
  var now = new Date()
  var events = cal.getEvents(this_year, now);
  var results = []
  
  for ( i = 0 ; i < events.length ; i++){
    var event = events[i];
    title = event.getTitle();
    Logger.log(title);
    var start = event.getStartTime() ;
    var end =  event.getEndTime();
    start = new Date(start);
    end = new Date(end);
    hours = ( end - start ) / ( 1000 * 60 * 60 );
    guests = event.getGuestList();
    
    data = {meeting:title};
    for (g in guests) {
      var email = guests[g].getEmail();
      var name = email.split('@')[0];
      Logger.log(email, name);
      data[name] = hours;
    }
    data['date'] = start.toDateString();
    results.push(data)
  }
  
  Logger.log(results);
  return results;
}

function calculateHours(){
  Logger.clear();
  Logger.log("calculateHours");
  var cal_id = "put your cal id here";
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var s = ss.getSheets()[0];
  var headerValues = s.getRange("A1:F1").getValues();
  Logger.log(headerValues[0].length);
  var columns = {}
  var num_columns = headerValues[0].length;
  for (i=0; i < num_columns; i++) {
    columns[headerValues[0][i].toLowerCase()] = i+1;
  }
  Logger.log(columns);
  //{'glen':5, 'paul':4, 'description':6, 'meeting':2, 'sarah':3, 'date':1}
  var title_column = 2;
  
  // from second row
  var results = meetingSummary(cal_id);
  Logger.log("results.length="+results.length);

  for ( var row_number = 1; row_number < results.length+1 ; row_number ++){
    var result = results[row_number-1];
    Logger.log("row_number="+row_number);
    Logger.log(result);
    for (name in columns) {
      Logger.log("..."+name+"   "+result[name]);
      if (typeof result[name] == 'undefined') result[name] = "";
      s.getRange(row_number+1, columns[name]).setValue(result[name]);
    }
  }
}
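As a sanity check of the hours formula used in meetingSummary, here is the same arithmetic in plain JavaScript, runnable outside Apps Script:

```javascript
// Subtracting two Dates yields milliseconds; divide to get hours.
var start = new Date(2014, 6, 27, 9, 0);   // 9:00am
var end   = new Date(2014, 6, 27, 10, 30); // 10:30am
var hours = (end - start) / (1000 * 60 * 60);
console.log(hours); // 1.5
```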



rspec 3 upgrade: conversion of should to expect


update: follow the rspec upgrade guide and try the transpec gem


Upgrading a Rails app to RSpec 3, I appreciated Jake Boxer’s guide. However, his regular expressions didn’t work in the editors I use, so I whipped up some command-line options that use the flexible unix utility sed.

If you want to actually understand the following scripts, I recommend Bruce Barnett’s sed tutorial and remember xargs is your friend.

Also, there are a few differences with sed on OSX, so these may not work on other platforms.

I didn’t attempt to handle multi-lined expressions, so I searched my spec directory for “lambda” in advance and changed those manually, and, of course, found a few other exceptions by running my tests afterwards.

Change “should ==” to “should eq”

You’ll need to change other operators also, but this was a pattern that happened to be very common in my code.

find . -type f -print0 | xargs -0 sed -i "" -E "s/^([[:space:]]+)(.*)\.should ==[[:space:]]*(.*)[[:space:]]*$/\1\2.should eq(\3)/g"

Change “whatever.should” to “expect(whatever).to”

This also works for should_not, since it is okay to say .to_not, even though I usually see people write .not_to

find . -type f -print0 | xargs -0 sed -i "" -E "s/^([[:space:]]+)(.*)\.should/\1expect(\2).to/g"
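To sanity-check the pattern before running it over a whole spec directory, here is the same transformation expressed in Ruby (a sketch; the sample line is made up):

```ruby
# Mirror of the sed rule: rewrite "whatever.should matcher" as
# "expect(whatever).to matcher", preserving the leading indentation.
line = "    user.name.should eq('Ada')"
converted = line.sub(/^(\s+)(.*)\.should/) { "#{$1}expect(#{$2}).to" }
puts converted  # => "    expect(user.name).to eq('Ada')"
```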

Also, check out: Cyrus Stoller’s upgrade tips


simple test first express setup & tutorial

$
0
0

It has been surprisingly hard to find a very simple tutorial to get started with Express, along with some common helpful tools, including tests!

Here’s a little tutorial for Node.js v0.10 and Express 4. I’m learning Express, since I’m working on an app in SailsJS, so I will pick options that mirror choices made by the SailsJS framework.

Install Express

Express is a popular simple web app framework for Node (similar to Sinatra for Ruby), and is easily installed with the fabulous Node Package Manager, npm. I find the generators handy (at least for learning), but they don’t ship with Express anymore, so you need to install them separately.

npm install -g express
npm install -g express-generator

Create an Express App

Let’s create an app named ‘test-app’ — this will create a new directory of that name with all the app files in it.

express test-app -e

The -e option tells express-generator to use ejs. (From the Express guide: Jade is the default. Express-generator supports only a few template engines, whereas Express itself supports virtually any template engine built for node. For the complete list, see express --help.)

It shows you all the files it creates and even gives a hint about next steps:

cd test-app
npm install

npm install will download all of the dependencies specified in our “package.json” file and put them in the node_modules directory. This directory will get big fast, so we probably want to add it to .gitignore.

Run the App!

Start the server

npm start

Then go to http://localhost:3000/ and see:
Browser with URL http://localhost:3000 shows Express in large letters, smaller letters below display "Welcome to Express"

Stop the server with ctrl-C.

Take a moment to review the contents of the generated package.json, the npm docs are a good reference for the defaults. All of the dependencies that we have right now are ones that express decided we should have. Max Ogden has some nice docs about Node modules.

Add a “devDependency” section to package.json:

  "devDependencies": {
    "mocha": "*",
    "chai": "*",
    "supertest": "*",
    "supervisor": "*"
   }

We’re adding a set of tools that are installed with npm but only used for development and testing.

Don’t forget to add a comma or we’ll get a scary looking error:

npm ERR! install Couldn't read dependencies
npm ERR! Failed to parse json
npm ERR! Unexpected string
...

Also in “package.json,” change

  "scripts": {
    "start": "node ./bin/www"
  },

to

  "scripts": {
    "start": "supervisor ./bin/www",
    "test": "./node_modules/.bin/mocha"
  },

then install the new packages with:

npm install

We’ve just added a set of development tools for rapid iteration and testing. The scripts section lets us create shortcuts for the npm command.

Supervisor Allows Fast Experimentation

Supervisor makes it so you can edit files and just refresh the page to see the change. Now that we have edited the npm ‘start’ script to use supervisor, we can:

npm start

We can view the main index page, by going to http://localhost:3000. Then without stopping the server, let’s edit

views/index.ejs

so the H1 text says “Hello!” instead of Express. We can refresh the page to see the update.

Mocha, Chai and Supertest for Testing

Mocha will serially run a set of tests and report failures nicely. It supports a number of different assertion libraries. Chai is a single assertion library that supports the popular variants: assert, expect and should.

You’ll need to create a test directory, empty for now. Let’s make sure we’re set up right:

mkdir test
npm test

We should see:

  0 passing (2ms)

create the file

test/index.test.js

with a test

var request = require('supertest')
  , express = require('express');
 
var app = require('../app');
 
describe('Index Page', function() {
  it("renders successfully", function(done) {
    request(app).get('/').expect(200, done);    
  })
})

run the test with

npm test

and it passes!

Adding New Behavior Test First

We can add expectations to our test. Let’s plan to add the text “Hello World” to the index page. Supertest supports simple regex syntax for comparing text. The Supertest API cleverly supports concise testing by assuming that a number as the first param is a status code, while a regex or a string is compared to the body.

  it("renders successfully", function(done) {
    request(app).get('/')
      .expect(200)
      .expect(/Hello World/, done);    
  })

This will fail

  1) Index Page renders successfully:
     Error: expected body '\n\n  \n    \n    \n  \n  \n    <h1>Hello!</h1>\n    Welcome to Express\n  \n\n' to match /Hello World/

Now we can edit the page in

views/index.ejs

and run the test again with

npm test

to see it pass!

  Index Page
GET / 200 10ms - 211b
    ✓ renders successfully 


  1 passing (34ms)
