Learning to learn (more!)

Hello everyone!

In a previous post I talked about how I’ve started looking into new learning techniques to help me on my learning quest.  I’m currently teaching myself math and programming, and it’s been tough.  Progress comes in fits and starts, and I’ve been down several dead-end paths.

  1. I tried a purely project-based approach.  I sat down and thought of interesting projects I’d like to make and then jumped into coding to make them.  That was a good way to get a taste of what programming is about and to confirm that it’s something I’d like to do more of.  However, the projects I was dreaming up were way out of the scope of my beginner’s ability.  I could have scoured GitHub for similar projects and mashed together a Frankenstein project that more-or-less did what I wanted it to do, but I didn’t think this would be a good way to actually learn the concepts.  I didn’t think this would help me to generalize skills and learn to do more interesting work in the long run.
  2. So I turned to classes.  I signed up for some advanced online classes through Udacity and EdX on machine learning, robots, and AI.  I started programming in February 2017 so it’s no wonder that by April I was NOT ready to take these types of classes.  I muddled through most of a Udacity course (they give you generous starter code) and got halfway through the EdX course before I hit a wall.  Again, I didn’t think I was really getting the concepts.  I could hack together a project that would spit out the right answer, but I didn’t really get what I was doing.
  3. So I took a step back.  I started taking linear algebra and then realized I needed to go back further, and started Calculus.

Learning Calculus

For the past two weeks, I’ve been teaching myself Calculus using the amazing resources from MIT OCW and Professor Paul Dawkins’ online notes.  I also bought a big book of calculus problems for additional practice.  The first week was pretty good.  I was chugging along and felt I was making good progress.

This past week has been less than great.  On Wednesday when I sat down to do practice problems, I got every single problem I tried before lunch wrong.  That means I spent four hours bashing away ineffectively at problems and feeling more frustrated and despondent as the minutes ticked by.  I had been unwilling to move on from the work I was doing (applications of derivatives) to new material (integrals) because I wanted to master the first thing first.  But it was clear this wasn’t working.  I moved on, and found a groove again with integrals.

But it was clear that I was missing something.  I wasn’t learning effectively.  Something just wasn’t clicking, and I wasn’t sure what.  I had done calculus in HS and done well, and I had gotten problems right just the day before.  Why did it suddenly feel like my brain was mud?

I was reading through Professor Dawkins’ post on how to study math and it was obvious to me that I was in category 2 of students who don’t do well in calculus.  I was studying for hours each day but not doing well on my problem sets.  It was clear to me that I had inefficient study habits and unless something changed, I was just going to end up wasting more time.

Around the same time, I stumbled across this GitHub community of Open Source Computer Science learners.  And from there, I found the subreddit for the group, which led me finally to this Q&A mysteriously and intriguingly titled “looking for alternatives.”

And there, I found this amazing resource for a self-learning CS curriculum.  What I love about this list is that it has a bunch of helpful resources for laying the groundwork for your self-learning endeavor.

I don’t plan to go through this whole curriculum, but I did start with the Learning How to Learn course on Coursera and it’s AMAZING.

There are some things I knew or practiced when I was in school but haven’t been doing this time around, to my own detriment, because I’m older and feeling pressure to see results faster.  Some of the key points:

On learning/chunking

  • Chunking is the idea of grouping together related ideas/concepts in order to improve learning.  If we are memorizing a song, we chunk the tune and the lyrics, which makes it easier to remember both.
  • When we learn something new, we lay down new neural pathways for the material.  We need to strengthen those neural pathways in order to truly understand something.
  • To strengthen neural pathways, it’s better to learn the material over time.  If we study for one hour a day for five days instead of five hours in one day, we’re more likely to remember the material and to understand it more deeply.
  • It’s best to work in small chunks of time.  For example, do 25 min of focused work, then take a break, then 25 min more etc.  It’s also helpful to review material right before bed as we commit things to long term memory while we sleep.
  • An easy technique to improve retention and learning is to try to write down the key points that you learned right after learning them (without notes/looking – which is what I’m trying to do right now!)

On procrastination

  • Focus on process instead of product to beat procrastination.  Instead of thinking, “I’m going to finish those five homework problems,” think, “I’m going to work on my homework for 25 min.”
  • Every night before bed, write down the tasks you plan to accomplish the next day.  Don’t go too crazy!  5-6 tasks is more than enough.  Keep them focused on process.
  • Keep all this in a journal and take note of what worked, what didn’t, and how long things actually take.  Over time you’ll get a better feel for what you can accomplish in that time.
  • Plan your quitting time.  It’s important to pace yourself and it’s also not effective to just keep on studying past a certain point.  You won’t learn more or better this way.
  • Procrastination starts with a cue; try to change that cue.  For example, if your cue to procrastinate is hearing the ping of a new email, turn off your phone.  Removing the cue will make it easier to avoid procrastinating.
  • Reward yourself for completing tasks.  Rewards can be emotional (I draw a smiley face and write ‘yay!’ on my paper when I finish something) or external.
  • Lastly, believe that you can change.  Belief that you can break the cycle of procrastination is important!

On memory

  • Humans have good spatial memory.  Use this to your advantage by building a memory palace.
  • The weirder or funnier your mnemonic devices, the better you’ll be able to remember the information.
  • It might seem silly, but these devices can help you as you’re starting to form memories.  Over time, this will help strengthen those neural pathways!

Eep!  And that’s all I have for right now.  I’m sure I’m forgetting a lot.  :/  Will need to review tomorrow (as per the suggested way of learning!)

Changing my tactics

All of this is to say that I have a lot of bad habits to unlearn and new habits to form.  Instead of going the Scott Young route and trying to cram a whole bunch of learning (i.e., a semester of Calculus) into one week, I’m going to spread things out a bit more.

Starting this week, I’m going to concurrently do my algorithms, calculus, and linear algebra coursework.  I plan to spend ~1 hour in the morning reviewing the material, and then dedicate the afternoon to practice problems or other study techniques (e.g. making flashcards, building my memory palace, etc.)  🙂

I’m not sure how this will go, but my rough goals are:

  • Finish all three courses by the end of August
  • Be comfortable with applications of Calculus and Linear Algebra
  • Be able to write the pseudocode for all the algorithms covered in the course
  • Be able to analyze running time of algorithms (which is an application of calculus, I believe, so….two in one!)

Having fun!

Lastly, while math and coding are fun, it’s important to give my brain a break and do something I enjoy!

I LOVE puzzles so another book I picked up is The Art and Craft of Problem Solving.  It’s aimed at HS students (and teachers) who are interested in the math olympiads.  While I’m definitely not in the right age group for that, it has a bunch of fun brain teaser math problems like the classic census taker problem.

A census-taker knocks on a door, and asks the woman inside 
how many children she has and how old they are. 
"I have three daughters, their ages are whole numbers, 
and the product of the ages is 36," says the mother. 

"That's not enough information," responds the census-taker. 
"I'd tell you the sum of their ages, but you'd still be stumped." 
"I wish you'd tell me something more." 
"Okay, my oldest daughter Annie likes dogs." 

What are the ages of the three daughters?
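(Spoiler warning!)  If you’d rather check your answer by brute force, a few lines of JavaScript will do it.  This is just my quick sketch, not the book’s solution:

```javascript
// Find all triples of whole-number ages with product 36, keep those
// whose age-sum is ambiguous (the census-taker knows the sum but is
// still stumped), then require a strictly oldest daughter
// ("my OLDEST daughter Annie likes dogs").
var triples = [];
for (var a = 1; a <= 36; a++) {
  for (var b = a; b <= 36; b++) {
    for (var c = b; c <= 36; c++) {
      if (a * b * c === 36) triples.push([a, b, c]);
    }
  }
}

// count how many triples share each sum
var sums = {};
triples.forEach(function(t) {
  var s = t[0] + t[1] + t[2];
  sums[s] = (sums[s] || 0) + 1;
});

// ambiguous sum + a unique oldest daughter
var answer = triples.filter(function(t) {
  return sums[t[0] + t[1] + t[2]] > 1 && t[2] > t[1];
});
console.log(answer); // the only possibility: [ [ 2, 2, 9 ] ]
```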

Enjoy!

 

Asking for help

**If anyone is interested in trying Wyzant (reviewed below), please use my code and get $40 of tutoring for free!**

Sometimes going it alone is great.  And sometimes you need help.

I’ve been teaching myself to code and backtracking to more and more basic subjects to give myself a solid foundation in programming.

I’m still not 100% sure what I’d like to specialize in, but I’m leaning towards data science and/or AI.  (Will write more on my love of cyborgs and why AI later…)

But I have to start at the basics.  For now,  I’m doing simple toy programs (tictactoeAI) and working through several Coursera courses on algorithms.

While the code I write works, I just know it’s ugly.  It feels brittle and ungainly, and while I want to fix it, I don’t know how.  One of my biggest fears about being a self-taught programmer is that my code might work, but it’ll be a hot mess and I’ll have locked in bad habits.

If I were in school, I’d be getting design feedback, but going it alone, that’s hard to get.  I’ve been submitting code snippets to Code Review Stack Exchange, and while that’s been hugely helpful, it doesn’t go far enough.

I really needed someone to sit down with me, walk through my code line by line, and give me feedback.

Looking into tutoring options

After a week or two of fretting over what to do I finally decided to try an online tutor.

Ideally, you could ask a friend or coworker for help, but I don’t have any close friends that code and felt bad asking someone I didn’t know very well for such a big favor.  Since I’m a beginner, I need a lot of patient explanation and I thought that was just too much to ask someone who wasn’t a close friend or family member.

I looked into various meetup options, but ruled them out for much the same reason. The ones I found were group study sessions where peers helped one another – basically stackexchange IRL.

Online tutoring with Wyzant

So I took a chance on a tutor on Wyzant.  I was worried about the cost ($40/hr for the tutor I chose), but since the first session is free I decided to try it out.

I met my tutor for the first time this afternoon.  She was amazing!  It’s incredibly helpful to have someone who knows what they’re doing walk with you through your code.  I’ve done crit sessions with writing and art before, and I would put codereview closer to the crit side of the spectrum vs. the tutoring side.

We discussed some foundational CS concepts like layers of abstraction, scope, and the stack (and how/when it gets cleared out).  She clarified a lot of concepts I had read about and only vaguely understood.  She also pointed out some problems with my code that I never would have noticed on my own.

For example, in my tictactoe AI, when the user plays more than one game, I create a new instance of the board/AI objects for each new play.  She told me that if a user were to play a whole bunch of games in a row, my current setup would lead to stack overflow issues.  This isn’t something I ever would have caught, since I never played more than one or two games in testing.

All in all, it was great to be able to ask all the why questions that I never get to ask on online forums or when asking a quick question to a friend.  When she first pointed out the issue with my tictactoe game, instead of just saying ‘ok, got it’ and fixing it, I asked why it was a problem.  That led to a 10 min tangent on stacks and scopes, but it clarified the concepts I needed to know.

All in all, it was time well spent.  I’m reworking the programs we looked at together now and am planning on making our sessions a regular (weekly?) occurrence.  I figure $40 a week on tutoring is still waaaay less $$ than school and a worthwhile investment to learn to write legible and strong code!

Coding Part III: Design

Hi there!  Read Part I of my coding journey here, Part II here, and get the chrome extension here!

The dreaded “D” word (Design)

I am a terrible graphic designer.  I draw, paint, and sculpt and people always think because I “do” art I must be good at design.  They apply this to everything from graphic design to interior design.  However, these are totally different disciplines and I am a terrible designer.  So I won’t even pretend that I came up with some fancy design.  Instead, I googled around a whole bunch until I found a layout that I liked and copied it.

The design my tabs are based on is from Jen Simmons’ blog.  I LOVED her big fat serif font, so I found a similar free font on Google Fonts called Abril Fatface.  I used this for the names of the women.  For my body font, I picked a regular-looking serif called Lora, also from Google.

Jen Simmons very helpfully had an example in her blog post that was exactly what I wanted my page to look like.  However, I wanted to use the new CSS Grid layout system that had just launched (March 2017!).

I used her page as a visual template, but did my CSS from scratch.  I’m not very familiar with CSS.  I can never make floats and spans and whatever else do what I want them to do.  The Grid layout system is AMAZING.  It’s soooo easy to use and intuitive.  I was thanking my lucky stars this had just come out.

The key design decisions for me here were:

  1. I needed three layout options: text only, text + image from Wikipedia, and a backup local image.
  2. I wanted a responsive page.

Below is what I came up with.  As you can see, the layout is heavily influenced by Jen Simmons’.

Here is the code for my styles.css layout!

[Screenshots: text and image from Wikipedia]

[Screenshots: text only from Wikipedia]

[Screenshot: backup image from local directory]

The code

Now that I knew what I wanted my page to look like, I had to code it.

My img variable was the key.  I had three options:

  1. Wikipedia did not return an image: img = ""
  2. Used a local image (no Wikipedia image or text): img = "/imgs/X.jpg", so img[0] is always "/"
  3. Wikipedia returned an image (and text): all other cases

I used two classes in my CSS: one for one column and one for two.  In my JavaScript, I added an if statement to see which of the three options above was in my img variable.  Then I added the appropriate class to my HTML using jQuery.

if (img == "") {
  // no Wikipedia image or text: one column, drop the image div
  $(".tab").addClass("onecol");
  $("#imgdiv").remove();
} else if (img[0] == "/") {
  // local backup image: use it as a full-page background
  $(document.body).css("background", "url(" + img + ") no-repeat center center fixed");
  $("#imgdiv").remove();
} else {
  // Wikipedia image: two columns
  $(".tab").addClass("twocol");
  $("#img").attr("src", img);
}

Trimming text length

I also did a few other checks to account for other funkiness that came up in testing.  For example, really long Wikipedia extracts would go off the page.  I didn’t like that the user needed to scroll to get to the “Read more on Wikipedia” link.  However, I couldn’t just put in a character limit because I didn’t like the idea of the summary getting cut off mid-sentence or mid-paragraph.  This was a bit of a pain, but I added a loop that counted up to 1200 characters to be displayed, and then when it reached the limit trimmed the extract to the nearest full paragraph.

I did this instead of just displaying the first or first and second paragraphs because paragraph length on Wikipedia is super inconsistent.  I didn’t want to just display the first paragraph since some women had a one sentence first paragraph and I wanted to capture more content from those cases.

// extract is the string from Wikipedia of the summary section. 
// It includes html tags.
if (extract.length > 1200) {
  // split on html tags, keeping the tags themselves in the array
  var extractArray = extract.split(/(<[^>]*>)/);
  extract = "";
  var chars = 0;
  var stop = 0;
  // display up to 1200 chars of content - disregard html tags
  extractArray.forEach(function(item) {
    if (item == "" || item[0] == "<") {
      // tags don't count toward the limit, but keep them so the html stays balanced
      extract = extract + item;
    } else if (chars <= 1200 && stop == 0) {
      extract = extract + item;
      chars = chars + item.length;
    } else {
      // limit reached: skip the remaining text nodes
      stop = 1;
    }
  });
}

Final thoughts & to dos

This was a really great learning experience.  I used jQuery, HTML, CSS Grid, and lots of other new skills.  The whole thing is far from perfect.  I wish I could detect faces and dynamically position the images so this wouldn’t happen:

[Screenshot: an image cropped at an awkward spot]

Since most of the images are portrait, when the page scales down to a single column and the image area is landscape, photos can get cut off in weird places.

I also didn’t make the local file option responsive.

I would like a dynamic way to have the list of women.  That way I wouldn’t need to update the files in the chrome store if I want to add or change any names.

I have many other “I wish”es and “next time”s, but for now, that’s pretty much it!

Here is the complete code for my chrome extension!  You can install the extension from the chrome store here.

Thanks for reading!

Other random things I learned:

  • If you don’t specify the data type of the result you want from your .get request, jQuery will try to guess it and it might not be right.  I was banging my head against the wall trying to figure out why I was getting back a string and then I read this:
    • The type of data that you’re expecting back from the server. If none is specified, jQuery will try to infer it based on the MIME type of the response (an XML MIME type will yield XML, in 1.4 JSON will yield a JavaScript object, in 1.4 script will execute the script, and anything else will be returned as a string). source
  • I tried using the wikiblurb library since CS50 is all about standing on the shoulders of others and using good libraries.  I really liked that they had the links in the Wikipedia blurbs they got back, but I wanted more customization and it seemed like a bigger pain to customize around their library than to not use it.
  • I looked at stars.chromeexperiments.com to try to see how they used wikipedia extracts.  However, the blurb wasn’t matching up to what I was seeing in wikipedia, so I’m guessing that the blurb wasn’t getting pulled dynamically.  When I dug up the code (can’t seem to get to it again!) it looked like the blurb data was being held in a file somewhere as opposed to coming from a json request to the wikipedia API.  See screenshot below of the difference.

[Screenshot: the site’s blurb vs. the live Wikipedia extract]

  • Object-fit was key to replicating Jen Simmons’ image behavior (filling its portion of the screen).
  • Some useful documentation of named grid areas.  This was very helpful in using CSS grids to do my layout.

Coding Part II: Radfems

 

Hi there!  Read Part I of my coding journey here and get the chrome extension here!

Part of what reignited my desire to code was a project I did in December 2016 called He to She.  It was a simple chrome extension that changed instances of masculine pronouns (he, his, him) to feminine ones (she, hers, her).  It was pretty janky (code is here) and was largely written with the help of tutorials.

But it was SO much fun to write!

And I wanted to do more.

For my next project, I wanted to create a chrome extension that brought up a new woman in STEM with each new tab.  There are plenty of new tab extensions out there and I figured it should be within my realm of understanding.

Nope!

It was way beyond my capabilities.  So I headed back to the drawing board with CS50.  This time, I was determined to finish.  I had projects to create!

Granted, it wasn’t a straight shot.  I got distracted and built a shelf and a robot, but I got through the course in the end.

So there I was with a blank sublime text page, an idea for a project, and no idea how to start.

Sketching out the plan

I began by sketching out the user experience of my project:

  1. User opens a new tab.  A page with a random woman to know in STEM appears.  There is a picture, her name, and a short bio pulled from Wikipedia.  At the bottom of the page, there is a link to read more on Wikipedia.

Then, I broke down the actions I would need to figure out:

  1. Get the name of a woman in STEM
    1. I decided to put a pre-determined list of names in a txt file since I couldn’t find a good RSS feed or other such list of women in STEM
  2. Get info on the woman from Wikipedia
  3. Get picture of the woman from Wikipedia
  4. Add Wikipedia information to the new tab page
  5. Add some sort of backup option if the user is offline or Wikipedia doesn’t work
  6. Format the page so it looks pretty!

Using the Wikipedia API

I decided to tackle the info from Wikipedia first.  This was, in my mind, the meat of the program and so it was important to get it working.

To do this, I knew I would need to get a JSON object from Wikipedia.  JSON (JavaScript Object Notation) is a lightweight text format for structured data: basically, an easily readable bundle of information from a website.
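For instance, here’s a tiny example of JSON text and the JavaScript object it parses into (the values are made up, not real Wikipedia data):

```javascript
// JSON is just text, but it maps directly onto a JavaScript object.
var text = '{"title": "Weather", "pageid": 12345}'; // hypothetical values
var obj = JSON.parse(text);
console.log(obj.title);  // "Weather"
console.log(obj.pageid); // 12345
```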

To get a JSON object, you need to request it.  One way to do this is to use jQuery to request a JSON object from an API.

Side note: We had used jQuery in CS50, so I understood the basic concepts, but I hadn’t sat down and read the documentation.  Like I said in Part I, I like getting context then going back for the foundation.  For me, it’s much easier to write a program using jQuery then go back and read the documentation than to read the documentation top to bottom and sit down to write a program.

CS50 is set up to facilitate that style of learning.  They give you serious training wheels so you don’t really need to understand what you’re writing.  It’s up to you to follow up and read documentation to truly understand what you’re doing.

Back to my project: I started looking into whether Wikipedia had an API and found this page.  I also found this page about parsing JSON responses from jQuery and this one about Wikipedia extracts.  I used the URL from the Wikipedia extract page and some of the code from the parsing JSON page to put together a first working version of my JavaScript code.  A screenshot of my first successful grab of Wikipedia info below!

[Screenshot: my first successful grab of Wikipedia info]

link to v1 JS code on GitHub

Seeing the words appear on the page was IMMENSELY satisfying, but I was a long way from a launch ready extension.

Making my JSON request dynamic

My next step was to make my JSON request dynamic.  In v1, I had hard-coded in a single URL that retrieved the summary info for the Wikipedia page for “Weather.”  I looked around for how I might build the Wikipedia request and had an a-ha moment after re-reading the jQuery documentation on .getJSON.

I realized that “Data that is sent to the server is appended to the URL as a query string.”  This meant I could pass a number of parameters into “data” as an object, and the output would be url.com?A=B&X=Y for the object {A: B, X: Y}.

So I looked again at my wikipedia URL and translated the query string into a data object like so:

https://en.wikipedia.org/w/api.php?format=json&action=query&prop=extracts&exintro=&explaintext=&titles=Weather&redirects=1
{ "format" : "json",
  "action" : "query"
  // etc etc etc
}

You’ll notice that the empty parameters like exintro=& are linked to empty strings in my data object, like this: { “exintro” : “” }
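You can see the object-to-query-string idea in action even without jQuery: the browser’s built-in URLSearchParams does the same kind of serialization (jQuery uses its own $.param internally, so this is just a sketch of the concept):

```javascript
// Serializing an object into a query string, the same idea
// $.getJSON uses for its data argument.
var params = new URLSearchParams({
  format: "json",
  action: "query",
  exintro: ""   // empty parameter, just like exintro=& in the URL
});
console.log(params.toString()); // "format=json&action=query&exintro="
```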

I also (later on) read on the Wikipedia API documentation page that you can (and should) pass multiple variables to the same parameter using the pipe symbol “|”.

For example, prop=extracts&prop=pageimages became:

{ "prop" : "extracts|pageimages" }

My final query looked like this:

var wikiAPI = "https://en.wikipedia.org/w/api.php?";
var test = $.getJSON( wikiAPI, {
  format: "json",
  action: "query",
  prop: "extracts|pageimages",
  exintro: "",        // this is to get just the intro paragraph
  titles: fem,        // variable with name of the random woman to lookup
  piprop: "original", // this is to get the image
  redirects: "true"   // this returns the final page after any redirects
});

Loading in my ladies

Now that that was done, I had to figure out how to load in a list of ladies and retrieve a random lady to look up.  In CS50, we had used separate text files to hold long lists, so I decided that’s what I would do.  I created a .txt file and manually typed in a list of women in STEM.  (The list is basically a mashup of various buzzfeed/wikipedia/other lists, with some women whose bios are super short removed, so suggestions/edits welcome!)

To do this, I would need to use a GET request.  This is exactly what I had done with the Wikipedia request, so this would be easy!  Also, since my file was local and I didn’t need to send any additional parameters, my GET request was VERY straightforward.  Just $.get(filename).  See below for my code:

$.get("radfems.txt")
  .done(function(femlist) {
    femlist = femlist.split(/[\n,]+/);
    console.log("getFemList success");
    randomFem(femlist, callback);
  })
  .fail(function() {
    console.log("getFemList error");
  })
  .always(function() {
    console.log("getFemList complete");
  });

You’ll see there are no additional parameters passed to my GET request outside of the filename.

The next piece was to parse the returned object.  Basically, I got back a big string, and I wanted to split it into an array of names.  Unfortunately, I had formatted my text file so that each woman was separated by both a comma and a newline.  I didn’t want to go back and reformat, so I decided to use a regular expression (regex) to match the comma+newline pattern.

Using an online regex tester, I tested out my regex and got back an array of my women’s names.  Success!
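The split itself is easy to sanity-check (with hypothetical names standing in for my actual list):

```javascript
// Splitting on one-or-more commas/newlines, so the "comma + newline"
// between names counts as a single separator.
var raw = "Ada Lovelace,\nGrace Hopper,\nKatherine Johnson";
var femlist = raw.split(/[\n,]+/);
console.log(femlist); // [ 'Ada Lovelace', 'Grace Hopper', 'Katherine Johnson' ]
```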

Retrieving a name

Now that I had my list of women loaded in, I needed a way to get a random name to then lookup in Wikipedia.  This part was very straightforward.  I used the built in Math.random() utility to generate a random number, and then used that number as the index to get a woman from my array.

var random = Math.floor(Math.random() * femlist.length);
var fem = femlist[random];
// re-roll if we landed on an empty entry (e.g. from a trailing newline)
while (fem == "") {
  random = Math.floor(Math.random() * femlist.length);
  fem = femlist[random];
}
getFemInfo(fem, callback);

Local/backup image

The last piece of the puzzle was having some sort of backup image or text for when the user was offline.  I googled around and found this women in STEM bio series that jewelry designer aubergdesigns had created.  I pulled a few of these images to use as my backups.

I put these images into their own “imgs” folder, and then linked to a random image in that local folder whenever the Wikipedia request failed.

var test = $.getJSON( wikiAPI, {
  // .... this part is shown above...trying to keep it concise!
})
.done(function(test) {
  // .... this part is really long, see github for code
})
.fail(function() {
  console.log("getFemInfo error");
  getLocalFem(callback);
});

// LOCAL FILE RETRIEVAL FUNCTION
var getLocalFem = function(callback) {
  console.log("getLocalFem");
  var random = Math.floor(Math.random() * 3);

  var img = "/imgs/" + random + ".jpg";
  callback("", "", img, "");
};

Callback functions

Up until this point, I had been using a global object container (more here) to store a number of variables – such as the name of the woman I was looking up and the Wikipedia page ID – that I wanted to access from several functions.

However, I had read that it’s generally discouraged to use global variables.  I think using a global object container is less discouraged, but still I wanted to see if there was another way.

Much googling led me to the concept of callback functions.  I wanted certain functions not to execute until the data they needed was ready, so I thought it made sense to use callback functions.  Essentially, a callback function is a function that is passed to another function as an argument.  The callback function is then called/executed WITHIN the other function.

I like examples so I googled around some more and found this code for displaying a quote in each new tab which helped me to understand how callbacks are used in the wild.  (I often find it helpful to look at longer pieces of code in Github, as examples in documentation are often really short snippets and I find it helpful to see more context.)

I then structured my code similarly to the code from csoni111.

  1. updateFem: I called a main updateFem() function once my document (the new tab) loaded.  That function contained the code to actually place my content on the page.
  2. getFemList: In order to be able to place content, I needed to get content.  I passed the function for placing content (my callback) to the next function which loaded the list of fems from my text file.
  3. randomFem: Once the list was loaded, I passed my callback (function to place content) onto the next function – the one that got a random name.
  4. getFemInfo: After I had a random name, I passed my callback over to the next function to get information from Wikipedia.
    1. callback: If I was able to successfully retrieve info from Wikipedia, I passed the info to the callback function to execute.
    2. getLocalFem –> callback: If I was unsuccessful (e.g. if I was offline when I opened a new tab), I retrieved a backup image from a local directory and passed this to the callback function to execute.
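To make the shape of that hand-off concrete, here’s a stripped-down sketch of the chain.  Every step is faked with hypothetical data (the real code reads my text file and calls the Wikipedia API), so only the callback-passing structure matches my extension:

```javascript
// Each function does its bit of work, then hands the callback on.
function getFemList(callback) {
  var femlist = ["Ada Lovelace", "Grace Hopper"]; // stand-in for radfems.txt
  randomFem(femlist, callback);
}

function randomFem(femlist, callback) {
  var fem = femlist[Math.floor(Math.random() * femlist.length)];
  getFemInfo(fem, callback);
}

function getFemInfo(fem, callback) {
  // stand-in for the $.getJSON call; on failure this is where
  // getLocalFem(callback) would run instead
  callback(fem, "short bio from Wikipedia");
}

function updateFem() {
  // the callback: the last function to execute, once data is ready
  getFemList(function(name, info) {
    console.log(name + ": " + info);
  });
}

updateFem();
```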

The callback was the very last function I wanted to execute.  I thought of it as an empty box with a bunch of gears inside that was getting handed from function to function.  Once all the other functions had executed, the final function handed all the necessary parts to my callback function and told the callback function to execute.

So that was the info retrieval side of things.  Read on to Part III to learn about how I displayed my content!