Coding Part III: Design

Hi there!  Read Part I of my coding journey here, Part II here, and get the Chrome extension here!

The dreaded “D” word (Design)

I am a terrible graphic designer.  I draw, paint, and sculpt, and people always think that because I “do” art I must be good at design.  They apply this to everything from graphic design to interior design.  However, these are totally different disciplines, and I am a terrible designer.  So I won’t even pretend that I came up with some fancy design.  Instead, I googled around a whole bunch until I found a layout I liked and copied it.

The design my tabs are based on is from Jen Simmons’ blog.  I LOVED her big fat serif font, so I found a similar free font on Google Fonts called Abril Fatface.  I used this for the names of the women.  For my body font, I picked a regular-looking serif called Lora, also from Google Fonts.
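For reference, pulling those two fonts into a stylesheet looks something like this (the URL follows Google Fonts’ standard embed format; the selectors are just examples, not my actual markup):

```css
/* Load both Google Fonts in one request */
@import url('https://fonts.googleapis.com/css?family=Abril+Fatface|Lora');

h1 {
  font-family: 'Abril Fatface', serif; /* big fat serif for the names */
}

body {
  font-family: 'Lora', serif; /* regular-looking serif for body text */
}
```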

Jen Simmons very helpfully had an example in her blog post that was exactly what I wanted my page to look like.  However, I wanted to use the new CSS Grid layout system that had just launched (March 2017!).

I used her page as a visual template, but did my CSS from scratch.  I’m not very familiar with CSS.  I can never make floats and spans and whatever else do what I want them to do.  The Grid layout system is AMAZING.  It’s soooo easy to use and intuitive.  I was thanking my lucky stars this had just come out.

The key design decisions for me here were:

  1. I needed three layout options: text only, text + image from Wikipedia, and a backup local image.
  2. I wanted a responsive page.
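Here is a rough sketch of how those two column layouts can be expressed with CSS Grid.  The onecol/twocol class names and #imgdiv come from my JavaScript below, but the specific grid values and the .text selector here are illustrative, not my exact stylesheet:

```css
/* Two columns: text beside the Wikipedia image */
.twocol {
  display: grid;
  grid-template-columns: 1fr 1fr;
  grid-template-areas: "text image";
}

/* One column: text only */
.onecol {
  display: grid;
  grid-template-areas: "text";
}

/* Children opt into the named areas (.text is a stand-in selector) */
.text   { grid-area: text; }
#imgdiv { grid-area: image; }

/* Responsive: stack the image above the text on narrow screens */
@media (max-width: 700px) {
  .twocol {
    grid-template-columns: 1fr;
    grid-template-areas: "image" "text";
  }
}
```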

Below is what I came up with.  As you can see, the layout is heavily influenced by Jen Simmons’.

Here is the code for my styles.css layout!

[Screenshots: text and image from Wikipedia]

[Screenshots: text only from Wikipedia]

[Screenshot: backup image from local directory]

The code

Now that I knew what I wanted my page to look like, I had to code it.

My img variable was the key.  I had three options:

  1. Wikipedia returned text but no image: img = ""
  2. A local backup image was used (no Wikipedia image or text): img = "/imgs/X.jpg", so img[0] is always "/"
  3. Wikipedia returned an image (and text): everything else.

I used two classes in my CSS: one for one column and one for two.  In my JavaScript I added an if statement to check which of the three options above was in my img variable.  Then I added the appropriate class to my HTML using jQuery.

if (img == "") {
  // Wikipedia returned text but no image: one-column, text-only layout
  $(".tab").addClass("onecol");
  $("#imgdiv").remove();
} else if (img[0] == "/") {
  // local backup image: use it as a full-page background
  $(document.body).css("background", "url(" + img + ") no-repeat center center fixed");
  $("#imgdiv").remove();
} else {
  // Wikipedia returned an image: two-column layout, image beside the text
  $(".tab").addClass("twocol");
  $("#img").attr("src", img);
}

Trimming text length

I also did a few other checks to account for other funkiness that came up in testing.  For example, really long Wikipedia extracts would run off the page, and I didn’t like that the user needed to scroll to get to the “Read more on Wikipedia” link.  However, I couldn’t just put in a character limit, because I didn’t like the idea of the summary getting cut off mid-sentence or mid-paragraph.  This was a bit of a pain, but I added a loop that counted the displayed characters up to a 1200-character limit and then, when it reached the limit, trimmed the extract to the nearest full paragraph.

I did this instead of just displaying the first paragraph or two because paragraph length on Wikipedia is super inconsistent.  Some women had a one-sentence first paragraph, and I wanted to capture more content in those cases.

// extract is the string from Wikipedia of the summary section.
// It includes html tags.
if (extract.length > 1200) {
  // split on html tags, keeping the tags as separate array items
  var extractArray = extract.split(/(<[^>]*>)/);
  extract = "";
  var chars = 0;
  var stop = 0;
  // display up to 1200 chars of content - disregard html tags
  extractArray.forEach(function(item) {
    if (item == "" || item[0] == "<") {
      // tags don't count toward the limit and are always kept,
      // so the html stays balanced
      extract = extract + item;
    } else if (chars + item.length <= 1200 && stop == 0) {
      extract = extract + item;
      chars = chars + item.length;
    } else {
      // first chunk of text past the limit - drop it and everything after
      stop = 1;
    }
  });
}
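To sanity-check the trimming, the same logic can be wrapped in a standalone function and run on a small sample (the function name and the sample HTML are mine, not the extension’s):

```javascript
// The trimming logic above, repackaged as a function so it's easy to test
function trimExtract(extract, limit) {
  if (extract.length <= limit) return extract;
  var parts = extract.split(/(<[^>]*>)/); // keep tags as separate items
  var out = "";
  var chars = 0;
  var stop = false;
  parts.forEach(function (part) {
    if (part === "" || part[0] === "<") {
      out += part; // tags are always kept, so the HTML stays balanced
    } else if (!stop && chars + part.length <= limit) {
      out += part;
      chars += part.length;
    } else {
      stop = true; // first text past the limit: drop it and all later text
    }
  });
  return out;
}

var sample = "<p>Short intro.</p><p>A much longer second paragraph here.</p>";
console.log(trimExtract(sample, 20)); // → "<p>Short intro.</p><p></p>"
```

Note that the empty second paragraph’s tags survive, which keeps the HTML well-formed.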

Final thoughts & to dos

This was a really great learning experience.  I used jQuery, HTML, CSS Grid, and lots of other new skills.  The whole thing is far from perfect; I wish I could detect faces and dynamically position the images so this wouldn’t happen:

[Screenshot: a portrait cropped at an awkward spot]

Since most of the images are portrait, when the page scales down to a single column and the image area is landscape, photos can get cut off in weird places.

I also didn’t make the local file option responsive.

I would also like a dynamic way to load the list of women.  That way I wouldn’t need to update the files in the Chrome store whenever I want to add or change any names.
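One way I could imagine doing that (purely a sketch; the URL, file format, and showWoman helper are hypothetical): host a names.json somewhere and fetch it when the tab loads, with the bundled list as a fallback:

```javascript
// Hypothetical: fetch the list of names from a hosted JSON file at startup,
// so updating the list doesn't require a new Chrome store release.
var NAMES_URL = "https://example.com/names.json"; // placeholder URL

// Bundled fallback list for when the fetch fails (names are illustrative)
var fallback = ["Ada Lovelace", "Grace Hopper", "Katherine Johnson"];

function pickRandom(names) {
  return names[Math.floor(Math.random() * names.length)];
}

// In the extension this would look something like:
// $.getJSON(NAMES_URL)
//   .done(function (names) { showWoman(pickRandom(names)); })
//   .fail(function ()      { showWoman(pickRandom(fallback)); });

console.log(pickRandom(fallback)); // one of the three fallback names
```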

I have many other “I wish”es and “next time”s, but for now, that’s pretty much it!

Here is the complete code for my Chrome extension!  You can install the extension from the Chrome Web Store here.

Thanks for reading!

Other random things I learned:

  • If you don’t specify the data type of the result you want from your .get request, jQuery will try to guess it and it might not be right.  I was banging my head against the wall trying to figure out why I was getting back a string and then I read this:
    • The type of data that you’re expecting back from the server. If none is specified, jQuery will try to infer it based on the MIME type of the response (an XML MIME type will yield XML, in 1.4 JSON will yield a JavaScript object, in 1.4 script will execute the script, and anything else will be returned as a string). source
  • I tried using the wikiblurb library since CS50 is all about standing on the shoulders of others and using good libraries.  I really liked that they had the links in the Wikipedia blurbs they got back, but I wanted more customization and it seemed like a bigger pain to customize around their library than to not use it.
  • I looked at stars.chromeexperiments.com to try to see how they used Wikipedia extracts.  However, the blurb wasn’t matching up to what I was seeing on Wikipedia, so I’m guessing that the blurb wasn’t getting pulled dynamically.  When I dug up the code (can’t seem to get to it again!) it looked like the blurb data was being held in a file somewhere as opposed to coming from a JSON request to the Wikipedia API.  See screenshot below of the difference.
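To make the dataType gotcha from the first bullet concrete: jQuery’s $.get takes dataType as its last argument, and passing "json" is what makes the callback receive an object instead of a raw string (the sample response body here is made up):

```javascript
// A made-up response body, shaped loosely like a Wikipedia API reply
var body = '{"title": "Ada Lovelace"}';

// Without a dataType hint (and without a JSON MIME type on the response),
// the success callback just gets this raw string:
console.log(typeof body); // "string"

// Telling jQuery to expect JSON, e.g. $.get(url, callback, "json"),
// is equivalent to parsing the string yourself:
var data = JSON.parse(body);
console.log(typeof data); // "object"
console.log(data.title);  // "Ada Lovelace"
```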

[Screenshot: the blurb on stars.chromeexperiments.com vs. the current Wikipedia text]

  • Object-fit was key to replicating Jen Simmons’ image behavior (filling its portion of the screen).
  • Some useful documentation of named grid areas.  This was very helpful in using CSS grids to do my layout.
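For reference, the object-fit behavior from the bullet above looks roughly like this (the selector is illustrative):

```css
/* Make the image fill its grid area without stretching */
.tab img {
  width: 100%;
  height: 100%;
  object-fit: cover;       /* crop to fill, preserving aspect ratio */
  object-position: center; /* which part of the photo survives the crop */
}
```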