Coding Part III: Design

Hi there!  Read Part I of my coding journey here, Part II here, and get the chrome extension here!

The dreaded “D” word (Design)

I am a terrible graphic designer.  I draw, paint, and sculpt, and people always assume that because I “do” art I must be good at design.  They apply this to everything from graphic design to interior design.  But these are totally different disciplines, and I am a terrible designer.  So I won’t even pretend that I came up with some fancy design.  Instead, I googled around a whole bunch until I found a layout that I liked and copied it.

The design my tabs are based on is from Jen Simmons’ blog.  I LOVED her big fat serif font, so I found a similar free font on Google Fonts called Abril Fatface.  I used this for the names of the women.  For my body font, I picked a regular-looking serif called Lora, also from Google.

Jen Simmons very helpfully had an example in her blog post that was exactly what I wanted my page to look like.  However, I wanted to use the new CSS Grid layout system that had just launched (March 2017!).

I used her page as a visual template, but did my CSS from scratch.  I’m not very familiar with CSS.  I can never make floats and spans and whatever else do what I want them to do.  The Grid layout system is AMAZING.  It’s soooo easy to use and intuitive.  I was thanking my lucky stars this had just come out.

The key design decisions for me here were:

  1. I needed three layout options: text only, text plus an image from Wikipedia, and a backup local image.
  2. I wanted a responsive page.

Below is what I came up with.  As you can see, the layout is heavily influenced by Jen Simmons’.

Here is the code for my styles.css layout!
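For context, here is a rough, hypothetical sketch of the idea (NOT my actual styles.css): a named-area grid, where the onecol/twocol classes are the ones toggled from JavaScript later in the post.

```css
/* A hypothetical sketch, not the real stylesheet.
   .tab, .onecol, .twocol, #imgdiv, #img match the names
   used in the JavaScript shown later in the post. */
.tab {
  display: grid;
  height: 100vh;
}

/* text + image, side by side */
.twocol {
  grid-template-columns: 1fr 1fr;
  grid-template-areas: "text image";
}

/* text only */
.onecol {
  grid-template-columns: 1fr;
  grid-template-areas: "text";
}

.text   { grid-area: text; }
#imgdiv { grid-area: image; }

/* fill the image area without distorting the photo */
#img {
  width: 100%;
  height: 100%;
  object-fit: cover;
}

/* collapse to a single column on narrow screens */
@media (max-width: 700px) {
  .twocol {
    grid-template-columns: 1fr;
    grid-template-areas: "image" "text";
  }
}
```

Named grid areas (the quoted strings in grid-template-areas) are what make this so intuitive: each element just declares which area it lives in.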

[Screenshots: layout with text and an image from Wikipedia]

[Screenshots: layout with text only from Wikipedia]

[Screenshot: layout with a backup image from the local directory]

The code

Now that I knew what I wanted my page to look like, I had to code it.

My img variable was the key.  I had three options:

  1. Wikipedia did not return an image.  In that case, img == "".
  2. A local backup image was used (no Wikipedia image or text).  Then img is "/imgs/X.jpg", so img[0] is always "/".
  3. Wikipedia returned an image (and text).  All other cases.

I used two classes in my CSS: one for one column and one for two.  In my JavaScript I added an if statement to check which of the three options above applied to my img variable.  Then I added the appropriate class to my HTML using jQuery.

if (img == "") {
  $(".tab").addClass("onecol");
  $("#imgdiv").remove();
} else if (img[0] == "/") {
  $(document.body).css("background", "url(" + img + ") no-repeat center center fixed");
  $("#imgdiv").remove();
} else {
  $(".tab").addClass("twocol");
  $("#img").attr("src", img);
}

Trimming text length

I also did a few other checks to account for other funkiness that came up in testing.  For example, really long Wikipedia extracts would go off the page.  I didn’t like that the user needed to scroll to get to the “Read more on Wikipedia” link.  However, I couldn’t just put in a character limit because I didn’t like the idea of the summary getting cut off mid-sentence or mid-paragraph.  This was a bit of a pain, but I added a loop that counted up to 1200 characters to be displayed, and then when it reached the limit trimmed the extract to the nearest full paragraph.

I did this instead of just displaying the first or first and second paragraphs because paragraph length on Wikipedia is super inconsistent.  I didn’t want to just display the first paragraph since some women had a one sentence first paragraph and I wanted to capture more content from those cases.

// extract is the string from Wikipedia of the summary section.
// It includes HTML tags.
if (extract.length > 1200) {
  // split on HTML tags, keeping the tags as items in the array
  var extractArray = extract.split(/(<[^>]*>)/);
  extract = "";
  var chars = 0;
  var stop = false;
  // display up to 1200 chars of content - disregard HTML tags
  extractArray.forEach(function(item) {
    if (item == "" || item[0] == "<") {
      // always keep tags so the HTML stays well-formed
      extract = extract + item;
    } else if (chars + item.length <= 1200 && !stop) {
      extract = extract + item;
      chars = chars + item.length;
    } else {
      // limit reached: drop the remaining text
      stop = true;
    }
  });
}

Final thoughts & to dos

This was a really great learning experience.  I used jQuery, HTML, CSS Grid, and lots of other new skills.  The whole thing is far from perfect.  I wish I could detect faces and dynamically position the images so this wouldn’t happen:

[Screenshot: a photo cropped at an awkward spot]

Since most of the images are portrait, when the page scales down to a single column and the image area is landscape, photos can get cut off in weird places.

I also didn’t make the local file option responsive.

I would like a dynamic way to have the list of women.  That way I wouldn’t need to update the files in the chrome store if I want to add or change any names.

I have many other “I wish” and “next time” items, but for now, that’s pretty much it!

Here is the complete code for my chrome extension!  You can install the extension from the chrome store here.

Thanks for reading!

Other random things I learned:

  • If you don’t specify the data type of the result you want from your .get request, jQuery will try to guess it and it might not be right.  I was banging my head against the wall trying to figure out why I was getting back a string and then I read this:
    • The type of data that you’re expecting back from the server. If none is specified, jQuery will try to infer it based on the MIME type of the response (an XML MIME type will yield XML, in 1.4 JSON will yield a JavaScript object, in 1.4 script will execute the script, and anything else will be returned as a string). source
  • I tried using the wikiblurb library since CS50 is all about standing on the shoulders of others and using good libraries.  I really liked that they had the links in the Wikipedia blurbs they got back, but I wanted more customization and it seemed like a bigger pain to customize around their library than to not use it.
  • I looked at stars.chromeexperiments.com to try to see how they used Wikipedia extracts.  However, the blurb wasn’t matching up to what I was seeing on Wikipedia, so I’m guessing that the blurb wasn’t getting pulled dynamically.  When I dug up the code (can’t seem to get to it again!) it looked like the blurb data was being held in a file somewhere as opposed to coming from a JSON request to the Wikipedia API.  See screenshot below of the difference.

[Screenshot: the stars.chromeexperiments.com blurb vs. the live Wikipedia text]

  • object-fit was key to replicating Jen Simmons’ image behavior (filling its portion of the screen).
  • Some useful documentation of named grid areas.  This was very helpful in using CSS grids to do my layout.

Coding Part II: Radfems


Hi there!  Read Part I of my coding journey here and get the chrome extension here!

Part of what reignited my desire to code was a project I did in December 2016 called He to She.  It was a simple chrome extension that changed instances of masculine pronouns (he, his, him) to feminine ones (she, hers, her).  It was pretty janky (code is here) and was largely written with the help of tutorials.

But it was SO much fun to write!

And I wanted to do more.

For my next project, I wanted to create a chrome extension that brought up a new woman in STEM with each new tab.  There are plenty of new tab extensions out there and I figured it should be within my realm of understanding.

Nope!

It was way beyond my capabilities.  So I headed back to the drawing board with CS50.  This time, I was determined to finish.  I had projects to create!

Granted, it wasn’t a straight shot.  I got distracted and built a shelf and a robot, but I got through the course in the end.

So there I was with a blank sublime text page, an idea for a project, and no idea how to start.

Sketching out the plan

I began by sketching out the user experience of my project:

  1. User opens a new tab.  A page with a random woman to know in STEM appears.  There is a picture, her name, and a short bio pulled from Wikipedia.  At the bottom of the page, there is a link to read more on Wikipedia.

Then, I broke down the actions I would need to figure out:

  1. Get the name of a woman in STEM
    1. I decided to put a pre-determined list of names in a txt file since I couldn’t find a good RSS feed or other such list of women in STEM
  2. Get info on the woman from Wikipedia
  3. Get picture of the woman from Wikipedia
  4. Add Wikipedia information to the new tab page
  5. Add some sort of backup option if the user is offline or Wikipedia doesn’t work
  6. Format the page so it looks pretty!

Using the Wikipedia API

I decided to tackle the info from Wikipedia first.  This was, in my mind, the meat of the program and so it was important to get it working.

To do this, I knew I would need to get a JSON object from Wikipedia.  JSON (JavaScript Object Notation) is a lightweight, human-readable format that websites and APIs use to bundle up structured information.

To get a JSON object, you need to request it.  One way to do this is to use jQuery to request a JSON object from an API.

Side note: We had used jQuery in CS50, so I understood the basic concepts, but I hadn’t sat down and read the documentation.  Like I said in Part I, I like getting context then going back for the foundation.  For me, it’s much easier to write a program using jQuery then go back and read the documentation than to read the documentation top to bottom and then sit down to write a program.

CS50 is set up to facilitate that style of learning.  They give you serious training wheels so you don’t really need to understand what you’re writing.  It’s up to you to follow up and read documentation to truly understand what you’re doing.

Back to my project: I started looking into whether Wikipedia had an API and found this page.  I also found this page about parsing JSON responses with jQuery and this one about Wikipedia extracts.  I used the URL from the Wikipedia extract page and some of the code from the parsing-JSON page to put together a first working version of my JavaScript code.  A screenshot of my first successful grab of Wikipedia info is below!

[Screenshot: my first successful grab of Wikipedia info]

link to v1 js code on Github

Seeing the words appear on the page was IMMENSELY satisfying, but I was a long way from a launch ready extension.

Making my JSON request dynamic

My next step was to make my JSON request dynamic.  In v1, I had hard-coded a single URL that retrieved the summary info for the Wikipedia page for “Weather.”  I looked around for how I might build the Wikipedia request and had an a-ha moment after re-reading the jQuery documentation on .getJSON.

I realized that “Data that is sent to the server is appended to the URL as a query string.”  This meant I could pass a number of parameters into “data” as an object, and the output would be url.com?A=B&X=Y for the object {A: B, X: Y}.
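That mapping can be sketched with the standard URLSearchParams API (jQuery’s $.param() does the equivalent internally when building the request):

```javascript
// Sketch of the data-object -> query-string mapping described above.
var data = { A: "B", X: "Y" };
var qs = new URLSearchParams(data).toString();
console.log(qs); // "A=B&X=Y"
```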

So I looked again at my Wikipedia URL and translated the query string into a data object like so:

https://en.wikipedia.org/w/api.php?format=json&action=query&prop=extracts&exintro=&explaintext=&titles=Weather&redirects=1
{ "format" : "json",
  "action" : "query"
  // etc etc etc
}

You’ll notice that the empty parameters like exintro=& are linked to empty strings in my data object, like this: { "exintro" : "" }

I also (later on) read on the Wikipedia API documentation page that you can (and should) pass multiple values to the same parameter using the pipe symbol “|”.

For example, prop=extracts&prop=pageimages became:

{ "prop" : "extracts|pageimages" }

My final query looked like this:

var wikiAPI = "https://en.wikipedia.org/w/api.php?";
var test = $.getJSON( wikiAPI, {
  format: "json",
  action: "query",
  prop: "extracts|pageimages",
  exintro: "",        // this is to get just the intro paragraph
  titles: fem,        // variable with name of the random woman to look up
  piprop: "original", // this is to get the image
  redirects: "true"   // this returns the final page after any redirects
});
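The response comes back as nested JSON.  Here’s a hedged sketch of roughly what the shape looks like for a prop=extracts|pageimages query and how the extract and image URL can be dug out.  The sample data below is made up; check the API sandbox for the exact shape of a real response.

```javascript
// Hypothetical sample of the response shape (in the real code this
// arrives in $.getJSON's .done handler).  The page ID is a key inside
// query.pages, so grab the first key rather than hard-coding it.
var data = {
  query: {
    pages: {
      "12345": { // page ID varies per article
        title: "Ada Lovelace",
        extract: "<p>Ada Lovelace was a mathematician.</p>",
        original: { source: "https://upload.wikimedia.org/example/Ada.jpg" }
      }
    }
  }
};

var pages = data.query.pages;
var pageId = Object.keys(pages)[0];
var extract = pages[pageId].extract;
var img = pages[pageId].original ? pages[pageId].original.source : "";
console.log(pageId, extract, img);
```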

Loading in my ladies

Now that that was done I had to figure out how to load in a list of ladies, and retrieve a random lady to look up.  In CS50, we had used separate text files to hold long lists so I decided that’s what I would do.  I created a .txt file and manually typed in a list of women in STEM.  (The list is basically a mashup of various buzzfeed/wikipedia/other lists, with some women whose bios are super short removed, so suggestions/edits welcome!)

To do this, I would need to use a GET request.  This is exactly what I had done with the Wikipedia request, so this would be easy!  Also, since my file was local and I didn’t need to send any additional parameters, my GET request was VERY straightforward.  Just $.get(filename).  See below for my code:

$.get("radfems.txt")
  .done(function(femlist) {
    femlist = femlist.split(/[\n,]+/);
    console.log("getFemList success");
    randomFem(femlist, callback);
  })
  .fail(function() {
    console.log("getFemList error");
  })
  .always(function() {
    console.log("getFemList complete");
  });

You’ll see there are no additional parameters passed to my GET request outside of the filename.

The next piece was to parse the returned object.  Basically, I got back a big string, and I wanted to split it into an array of names.  Unfortunately, I had formatted my text file so that each woman was separated by both a comma and a newline.  I didn’t want to go back and reformat, so I decided to use a regular expression (regex) to match the comma+newline pattern.

Using an online regex tester, I tested out my regex and got back an array of my women’s names.  Success!
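The split can be sketched like this (the names here are just examples):

```javascript
// Split a comma+newline separated list into an array of names.
// [\n,]+ matches one or more commas and/or newlines as a single separator.
var raw = "Ada Lovelace,\nGrace Hopper,\nKatherine Johnson";
var femlist = raw.split(/[\n,]+/);
console.log(femlist); // ["Ada Lovelace", "Grace Hopper", "Katherine Johnson"]
```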

Retrieving a name

Now that I had my list of women loaded in, I needed a way to get a random name to look up in Wikipedia.  This part was very straightforward.  I used the built-in Math.random() utility to generate a random number, and then used that number as the index to get a woman from my array.

var random = Math.floor(Math.random() * femlist.length);
var fem = femlist[random];
// if we landed on an empty entry, pick again
while (fem == "") {
  random = Math.floor(Math.random() * femlist.length);
  fem = femlist[random];
}
getFemInfo(fem, callback);

Local/backup image

The last piece of the puzzle was having some sort of backup image or text for when the user was offline.  I googled around and found this women-in-STEM bio series that jewelry designer aubergdesigns had created.  I pulled a few of these images to use as my backups.

I put these images into their own “imgs” folder, and then linked to a random image in that local folder whenever the Wikipedia request failed.

var test = $.getJSON( wikiAPI, {
    // ... this part is shown above ... trying to keep it concise!
  })
  .done(function(test) {
    // ... this part is really long, see GitHub for code
  })
  .fail(function() {
    console.log("getFemInfo error");
    getLocalFem(callback);
  });

// LOCAL FILE RETRIEVAL FUNCTION
var getLocalFem = function(callback) {
  console.log("getLocalFem");
  // pick one of the three backup images (0.jpg, 1.jpg, 2.jpg)
  var random = Math.floor(Math.random() * 3);
  var img = "/imgs/" + random + ".jpg";
  callback("", "", img, "");
};

Callback functions

Up until this point, I had been using a global object container (more here) to store a number of variables – such as the name of the woman I was looking up and the Wikipedia page ID – that I wanted to access from several functions.

However, I had read that it’s generally discouraged to use global variables.  I think using a global object container is less discouraged, but still I wanted to see if there was another way.

Much googling led me to the concept of callback functions.  I wanted certain functions not to execute until the data they needed was ready, so I thought it made sense to use callback functions.  Essentially, a callback function is a function that is passed to another function as an argument.  The callback function is then called/executed WITHIN the other function.

I like examples so I googled around some more and found this code for displaying a quote in each new tab which helped me to understand how callbacks are used in the wild.  (I often find it helpful to look at longer pieces of code in Github, as examples in documentation are often really short snippets and I find it helpful to see more context.)

I then structured my code similarly to the code from csoni111.

  1. updateFem: I called a main updateFem() function once my document (the new tab) loaded.  That function contained the code to actually place my content on the page.
  2. getFemList: In order to be able to place content, I needed to get content.  I passed the function for placing content (my callback) to the next function which loaded the list of fems from my text file.
  3. randomFem: Once the list was loaded, I passed my callback (function to place content) onto the next function – the one that got a random name.
  4. getFemInfo: After I had a random name, I passed my callback over to the next function to get information from Wikipedia.
    1. callback: If I was able to successfully retrieve info from Wikipedia, I passed the info to the callback function to execute.
    2. getLocalFem –> callback: If I was unsuccessful (e.g. if I was offline when I opened a new tab), I retrieved a backup image from a local directory and passed this to the callback function to execute.

The callback was the very last function I wanted to execute.  I thought of it as an empty box with a bunch of gears inside that was getting handed from function to function.  Once all the other functions had executed, the final function handed all the necessary parts to my callback function and told the callback function to execute.
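That hand-off can be sketched like this (function names mirror the list above; the bodies are simplified stand-ins for the real Wikipedia and file logic):

```javascript
function getFemList(callback) {
  var femlist = ["Ada Lovelace", "Grace Hopper"]; // stand-in for radfems.txt
  randomFem(femlist, callback);
}

function randomFem(femlist, callback) {
  var fem = femlist[Math.floor(Math.random() * femlist.length)];
  getFemInfo(fem, callback);
}

function getFemInfo(fem, callback) {
  // the real version fetches from Wikipedia here; on failure it
  // calls getLocalFem(callback) instead
  callback(fem, "(extract)", "(image url)", "(page url)");
}

// updateFem kicks everything off and supplies the final callback:
// the "box of gears" that actually places content on the page
function updateFem() {
  getFemList(function(name, extract, img, url) {
    console.log("Placing content for " + name);
  });
}

updateFem();
```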

So that was the info retrieval side of things.  Read on to Part III to learn about how I displayed my content!

Coding & learning with context

Hello, world!

I’m incredibly proud of this next project.  It perfectly captures the goal of the quitting quitting project Zeynep and I started.

I started taking CS50x in December of 2014.  At the time, I thought I could blow through the class in a month if I plugged away at it for 10 hours a day.  I had big plans of taking additional math and CS classes after CS50x and possibly going back to school for a master’s in CS.  What, a master’s in CS?!?  But you majored in Art and International Relations!  Yes, dear reader.  I had the exact same thought.

So then, why a master’s in CS?  I wanted a career change.  I hated my job (project management) and I wanted to make things.  I had taken a Udacity Intro to Python course back in 2011 and loved it.  But I didn’t really know where to go from there.  I ended up writing a bunch of Google Apps Scripts since they were easy to get started with, and useful for my job.  But I wasn’t really learning much else.  I felt I had hit a wall and wanted to be back in a school-like environment.

I spent HOURS reading through reddit and quora blogs about going back for a second bachelor’s in CS, or going to a coding bootcamp.  The consensus seemed to be a second bachelor’s was a waste of money and a bootcamp wasn’t rigorous enough.  Instead, most people suggested taking the core classes on your own and applying directly for a master’s program.  So that’s what I was doing, or so I thought.

My biggest problem, at least in the last few years, has been split attention.  There are SO MANY projects I want to do and not nearly enough time to do them all.  Instead of picking one thing and diving into it, I would start down one path, get frustrated that I wasn’t making progress fast enough, and then switch to something else.  The end result was that I got nothing done.

The problem with this approach was that I was a beginner at a lot of the things I wanted to do.  It’s like wanting to play Beethoven’s Symphony #5 on the piano, but not knowing how to play the piano.  You have two options.

  1. Watch a bunch of YouTube tutorials of people playing Symphony #5 and copy their finger movements exactly.  Eventually, you’ll be able to play that one piano piece, but nothing else.  You won’t have really learned to play the piano.  HOWEVER, with every note you hit you can hear Symphony #5 somewhere in there, struggling to get out, which keeps your enthusiasm up.  Also, you will have learned a LOT about piano playing (though you may not realize it) which will serve as a good foundation for learning the piano later.
  2. Start with the basics (reading musical notes, practicing scales) and work your way up to playing Symphony #5.  You’ll actually learn to play the piano, but during the learning process you won’t be playing Symphony #5.  That means you’ll need to keep reminding yourself what all that boring scale playing is leading to.  However, you’ve built a strong foundation about music playing that you can apply beyond Symphony #5.

That’s what it was like for me with programming.  The Udacity course was like starting with method #2, but stopping with scales.  After the course was done, I had no idea how to apply anything I learned to the projects I wanted to create.  I didn’t know how to go from playing scales to playing a real piece of music.

So I shifted to method #1.  I dove into a bunch of Apps Scripts projects, frankenstein-ing programs together using other people’s code, tutorials, and the Apps Scripts documentation.  I didn’t really get half the errors I was seeing.  I didn’t really understand why things worked when they did, but I could generally get things working.

However, this was deeply unsatisfying.  Which is why I came to CS50.  I wanted to be back in a school-like environment.  To learn things the “right way.”

Taking CS50 after having spent two years frankenstein-ing code was GREAT.  Going with method #1 gave me context so that when I saw lessons in CS50, I could think “Oooooohhh, THAT’S why that works” instead of thinking, “Ok, I get the theory now how do I apply it?”  I had already applied it and now I was backing into the theory.

But back to December 2014.  Taking CS50 was great, but it was hard.  I was itching to get started on projects and the thought of having to take 11 weeks of courses was overwhelming.  I got frustrated that I wasn’t making progress fast enough and quit.

This is totally counterintuitive and I know it, but I still do it all the time.  I think it’s going to take too long to learn something, so I quit.  But then 11 weeks later I’m sitting there thinking, why didn’t I just invest in this 11 weeks ago?  The problem?  Learning scales is boooring and I wanted to play the Symphony NOW!

Anyway, I quit the course in Jan 2015.  There was the issue that I wasn’t progressing fast enough coupled with my life getting busy.  I was also taking a sci-fi writing course at the time which I really threw myself into.  It was my 10 hour a day project.  On top of that, I was applying to new jobs.  I started a new job in April 2015 and just got distracted getting up to speed and working hard at the new gig.

But the new job was still in project management so, inevitably, I began to feel the itch.  I wanted to *make* things.  So I picked the course back up in Dec 2016 and really got to it in Feb/March 2017.  I finished the course last week (April 2017) and it was HARD.

I admit there were a few additional stop and starts in there, though not quite as long, and I skipped two problem sets, but I got through it!

My next post is about my final project!  Read on here…