(THIS PAGE IS A WORKING DRAFT – Still need to compile the reference links and add some links to useful additional info. Email me at email@example.com with any suggestions.)
Here we begin our journey into learning how to write code and develop computer software, with an overview of some concepts I wish I had known early in my own learning path. This page is meant to take virtually anyone and fill them in on some introductory programming concepts, provided they can read at a high-school level and are reasonably comfortable using a modern computer.
Most of the textbooks and online tutorials for programming that I’ve seen waste no time diving right into the rules of code and guiding the learner through writing their own. If that’s what you want, I assume you’ll skim right through this page or skip it entirely. That’s okay! I’ve encountered quite a few seasoned software developers with many years of work experience who may never have considered some of the concepts described on this page. Yet they’ve gotten by just fine – so I’m sure you can too… but I think you’d be missing out.
Many of the concepts described on this page can be picked up implicitly over the next few years of your career, but there’s no guarantee of that. You may catch glimpses or hints of this information over time, but having it explained explicitly and all at once up front may help you quite a bit with wrapping your mind around programming as a whole. It’s my hope that this will give you a fuller picture of the wide array of jobs and industries involved with software, as well as real insight into what the process of developing software actually is – these are the things they don’t always tell you when you’re starting out.
For instance, at this time traditional Computer Science college programs often gloss over or completely ignore this information. The classes you take there rarely provide formal descriptions of the tools and processes a software development team uses to coordinate its work, test and check it for quality, and release it to users. While they might require you to take a philosophy class on ‘Logic’ before getting deep into your Computer Science degree, there isn’t always a bridge that covers the gap of applying that learning to Computer Science. That may be because many of these insights are considered beyond the scope of a Bachelor’s degree, or because the practices and procedures change too quickly to build a lesson plan around… or maybe the CS degree programs just haven’t been updated recently enough. Often it seems the colleges you attend for Computer Science depend on your future employers and your own self-study efforts to fill in those gaps.
Again, feel free to skim or skip through this page at your leisure. You will come to know all of these things eventually, but getting to know them intimately now may give you an advantage. It’s my goal to set you on an optimal course toward becoming the absolute best programmer you can be.
Programming is Used Everywhere
Over the last few decades, computer programming has quickly become one of the most powerful skill sets in the job market, the world over. Computer programmers create the software that modern machines require to do what they do. They build the tools and help form the procedures of modern-day business, in all aspects of the enterprise. In fact, there are countless applied uses of programming in nearly every sector of the modern workforce. Regardless of what type of job you’d like to do, I promise you that there is a computer program that can assist you with the task, or that your employer requires you to use.
Among the more obvious uses of computer programming is its more traditional role in running office productivity applications in fields like Accounting, Business Management, Marketing, Law/Government, and Human Services. These have all gone from simple calculators and spreadsheets to gigantic enterprise platforms that store massive sets of data and coordinate huge projects for all the companies and governments of the modern world.
It’s in front of all of us every day as we go through our routines – so much so that many people take it for granted and hardly notice the work of a computer programmer. For instance, during my short time working at a grocery store I had to use different computer programs to get the job in the first place, to clock in and out of work, and throughout the course of my workday. There was a computer program that ran the cash registers in the check-out line, one that played the music in the store, and one that coordinated the intercom system.
Less obvious are the ways programming has expanded to uses in more blue-collar sectors like in Manufacturing to automate assembly lines and test products, in Agriculture to coordinate farming equipment and process food products, in Transportation and Distribution Logistics to move us and all our products from point A to point B, and in Academic Research for concentrations in the sciences, engineering, and mathematics that help bring to us all the ground-breaking advancements which continue the progress of society moving forward.
That’s just the tip of the iceberg! Under each computer program that we commonly see are ‘lower-level’ programs that help them do what they do, like application frameworks and software platforms. Under those ‘lower-level’ programs lurk the programs that interact with the machine hardware itself, like operating systems, and so on. There are programs that coordinate internet traffic, run web servers, control our smartphones, and enable modern smart assistants like Amazon’s Alexa or Microsoft’s Cortana to understand you when you speak. They are all here for us, helping us and entertaining us in this new digital revolution, behind the scenes and almost completely out of sight and out of mind.
Without a doubt, there is a place in the software industry for you, whether it’s obvious right now or not. For the longest time I assumed that a high aptitude in math was required to become a software developer, but in that I was sorely mistaken. Yes, it’s true that if you were to write the code that runs an aircraft’s autopilot, or design a system that operates a satellite, or one that runs in a graphing calculator… math would surely be required. However, software is used in many non-math-intensive areas as well.
For instance, an easy example of code that doesn’t depend on math skills runs on the website you’re reading now. This site, https://wforbes.net/, runs on WordPress, written in the PHP scripting language. It’s a Content Management System (commonly abbreviated ‘CMS’), which gives people like me the tools to quickly set up a website and host content like articles and media on it, all without having to touch a line of code. Aside from some simple math to determine post times, or something trivial like sorting a list on the page, there is very little math needed to make this happen behind the scenes. Yet, 35% of ALL websites currently use WordPress! ( https://kinsta.com/wordpress-market-share/ )
Even if you don’t want to specialize in writing code for your day job, there are thousands of jobs available if you have at least a basic understanding of programming. These are the support roles in the technology industry that work with software developers to fine-tune, release, market, maintain, and coordinate code projects. Beyond that, I hope that by the time you finish studying this page you’ll appreciate that learning to code, and staying in practice with it, builds your logical and analytical skills – an enriched set of understandings and thinking strategies you can bring to any industry and situation in life, even if you never work in the technology industry at all.
Learn to Code, Improve Your Life
During 2018, over 1.3 million people were employed as Software Developers, earning a median income of $105k per year ($51 per hour)! [#] Given this industry’s trend of double-digit percentage workforce increases, this year and for many years to come there will be an incredible need for new Software Developers. In fact, in the US there were an estimated 700,000 unfilled programming-related jobs in 2019.[#] That figure includes the many hundreds of thousands, if not millions, of people who work in the supporting job functions that surround software development: Technical Support Engineers, Project Managers, Configuration Management Specialists, and a wide variety of other technical specialties. There are also the many fields which use software development skills but focus on building technology beyond software, like Computer Hardware Engineers, Computer Research Scientists, System Administrators, and Network Technicians.[#] All of these positions require varying levels of skill in writing code, but Software Developers and Computer Programmers need to be intimately acquainted with how to write elegant code that’s easy to read and optimized to work well, above all else.
Aside from considerations of what sorts of jobs you can get and how much money you can make, learning how to write code can be a very fun and rewarding hobby too! As the internet becomes a larger part of our everyday lives and we gain access to different types of devices, there are ever more fun ways to use code to do useful things. From creating artwork and editing music with custom apps, to building tools that make life generally easier, to simply putting together an application for just about any project you can imagine – coding, and being aware of how code works, can enrich just about anyone’s life. Although it may sound complicated and impressive to build and program your own automatic sprinkler system to water your garden – one that intelligently considers soil conditions and weather, and even analyzes the plants’ health – you could do something like that with just 5–6 months of concentration and study… and probably about $200 for the materials.[#][#][#] Your only limitations are your imagination and the hardware/sensors available for you to use.
The goal of the writing that follows is to bring you up to speed on the basics of what programming is and give you a general overview of how programmers do what they do. Regardless of what job you do or what hobbies you keep, knowing this is very useful. Programming requires a special type of thinking which can really improve your problem-solving abilities and logical cognition. Being able to break a large problem that seems impossible down into smaller and smaller pieces that are easier to handle is not just a fundamental part of programming, but an important skill for anyone to have. It can be applied to almost any practical problem or situation in your day-to-day life, leading to better decision-making and planning. If you choose to use it this way, learning to program can improve your life by changing how you think.
The Basics of Code
Instructions in Source Code
Computer programs are made up of instructions that are carried out by the computer, one instruction at a time. Every single computer program, application, app, piece of software, and even video game you’ve ever used is made entirely of these instructions. That’s all a computer program is: a set of special text files with special text in them. That text is the code.
Source code is just special text saved into a special text file. When you write source code, you’re writing instructions that you would like the computer to carry out. The computer reads these text files in a similar way to how we read them: it starts at the beginning and reads one character (i.e. letter, number, or symbol) at a time from left to right, and one line at a time from top to bottom. Then the computer does exactly what you tell it to do in these instructions. It’s important to always remember that the computer will never do anything more, or anything less, than what we tell it to do in the source code.
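As a tiny concrete illustration, here is a complete source file, sketched in Python (just one of many languages you might eventually use; the name and text are made up for the example). The computer carries out its three instructions in order, top to bottom, and does nothing else:

```python
# greet.py - a complete, three-instruction program.
name = "Ada"                      # 1. store some text under the name 'name'
message = "Hello, " + name + "!"  # 2. build a longer piece of text from it
print(message)                    # 3. display the result on the screen
```

Running it prints `Hello, Ada!` exactly once – nothing more and nothing less, just as described above.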
Source code files are very similar to any text documents you’ve used before, with just two major differences. First, source code files have special file extensions; second, they need to be written in one of the many programming languages available to us, which use a specific syntax to communicate instructions to the computer. We’ll go over file extensions next.
File Extensions and File Types
File extensions are the letters after the dot in computer file names. For instance, you may have a text document named “GroceryList”, so the file name would be “GroceryList.txt”. That last part of the file name, the ‘.txt’, is the file extension. When you open and save a document in a word processor (like Microsoft Word, OpenOffice Writer, WPS Office Writer, TextEdit, WordPad, or Notepad), you’re creating a text document (or text file). These documents have file extensions like “.txt”, “.doc”, “.docx”, “.rtf”, or “.odt”. The extensions are abbreviations that indicate the file type (or file format), which tells the computer how it should interpret and open these files. When you try to open a “.docx” file, your computer may already know it should use Microsoft Word; when you open a “.txt” file, it may use Notepad.
You can find exhaustive lists of these file extensions online, but normally a programmer only needs to remember a handful of them for a given project or programming discipline.[#] Often, the tools programmers use to create source code will automatically apply the appropriate file extension when saving, but it’s still important to be generally aware of the types of source code and other files you may work with, and their associated file extensions.
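To make this concrete, here is a small sketch in Python (the language and the helper name are chosen just for illustration) of how a program can pull the extension out of a file name, much like your operating system does when deciding which application to launch:

```python
# Split a file name at its last dot to find the extension.
def get_extension(filename):
    # rsplit(".", 1) splits once, starting from the right:
    # "GroceryList.txt" -> ["GroceryList", "txt"]
    parts = filename.rsplit(".", 1)
    return parts[1] if len(parts) == 2 else ""

print(get_extension("GroceryList.txt"))  # txt
print(get_extension("Essay.docx"))       # docx
print(get_extension("README"))           # (no extension: prints an empty line)
```

A real operating system keeps a table mapping each extension to a default program; this function is just the first half of that lookup.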
Programming Languages And Their Syntax
Just as we use languages like English, Spanish, Mandarin, Farsi, Arabic, and so many more to communicate with each other, we use programming languages to communicate instructions to the computer in much the same way. Each line in this paragraph is formed into a sentence. The main goal of each sentence is to express one or a few individual pieces of information and contribute to the larger group of ideas that comprise this paragraph. Much the same, most programming languages are structured as statements (like sentences) which are all included in a block of code (a group of ideas), often known as a function or a method (like a paragraph).
What makes programming languages unique from each other – just like the differences between other types of languages – is their syntax. The ordering of terms, the structure of their contents, and the rules that define their proper format are a language’s syntax. In English, our sentences all end with punctuation like periods or question marks, and in many programming languages a statement must end with a symbol like a semicolon ( ; ). English is written so that certain types of words are ordered before or after others to follow grammar rules, and similarly, programming languages rely on a specific ordering of the parts, or expressions, of each statement.
A large difference between our spoken and written human languages and a computer’s programming languages is that these rules of syntax are usually extremely strict. Syntax errors made by the programmer when writing source code will almost always leave the computer either unable to create a program at all or running into trouble while attempting to run the program. Tracking down and fixing syntax errors is an incredibly large part of the work involved with programming. For new programmers, it’s virtually the only part! Since those who are new to programming may not be used to examining code closely for problems, they may spend 5% of their time creating the code and the other 95% going back through it for small errors and problems.
Unlike humans, who can read a jumbled sentence with poor grammar, punctuation mistakes, and misspelled words here or there and still understand the general gist of what the writer intended, computers are almost totally unable to determine implied meanings. For example, if you leave out the semicolon at the end of a statement (like forgetting to end your sentence with a period), the computer may think it should read that statement AND the next statement together as one single statement. You didn’t tell the computer the statement was over with the semicolon, so it just kept going! Other common examples are misspelling the name of something or placing one expression before another by accident. The computer has no way to work out the implied meaning behind what you intended to code, so your program won’t work as you intended – and probably won’t even start until you find the errors.
Luckily, most programming languages have something called a compiler (or, for some languages, an interpreter) which inspects the code you write before it’s turned into a running program. The compiler will usually give you hints, via error messages, about where your mistakes are and sometimes how to fix them. This can make the process of writing code much easier! It will often show you the area of the code that has the issue (usually by line number) and the type of error you made. From there, if the solution isn’t obvious, your task may include using a search engine, a documentation website, or a reference book to more fully understand how to fix the error. We’ll talk about what compilers do, how to use them, and some great places to find solutions later, but for now it’s important to know that you aren’t completely helpless when it comes to fixing errors in the code you write.
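As an example of the kind of help you get, the short Python sketch below deliberately contains a syntax error (a missing closing parenthesis) and asks Python to check it. The exact wording of the message varies between languages and versions, but the report reliably includes the file name, the line number, and a description of the problem:

```python
# A statement with a syntax error: the closing parenthesis is missing.
bad_code = 'print("Hello, world!"'

try:
    # compile() asks Python to check the code without running it.
    compile(bad_code, "example.py", "exec")
except SyntaxError as err:
    # The error report points at the file, the line, and the problem.
    print("Syntax error in", err.filename, "on line", err.lineno, "-", err.msg)
```

This is the same checking step that runs automatically whenever you start a Python program, which is why a typo is usually reported before any of your code executes.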
Now that we’ve covered some of the beginner concepts behind the source code of a computer program, let’s explore some of the thinking that programmers use to create this code. Then we can start to dive into writing our own code in the next pages.
Thinking Like a Programmer
Building an Algorithm
An algorithm is a description of a process: a step-by-step explanation of a procedure that takes you from the beginning to the end of doing something. Given that the computer will only ever do what our source code tells it to do, and that source code needs to be structured exactly and carefully to avoid errors, learning to build algorithms just as carefully will make the entire process easier. Although the term is often used to describe something a computer does, algorithms can be extended to explain how just about anything is done. In fact, I’ll go further and assert that anything which can be done can be described by an algorithm. When you wake up in the morning, the steps you take to get ready for school or work are an algorithm. When you carry out a task at work, like installing some drywall or helping a customer, you instinctively follow an algorithm. So let’s use a simple example. Thinking through the steps it takes to make a sandwich is a common exercise to help understand this concept in plain terms:
- Gather the food materials
- Set up the counter-top work area
- Build the sandwich
- Put the extra materials away
There’s not much to it, right? We all know how to make a sandwich. Well, I assume most of us do. However, if you’re from another culture that doesn’t commonly enjoy this highly refined dish or you’re a young child learning to make a sandwich for the first time… you may need more explanation. Which food materials are we referring to? How do you ‘build’ it?
Quickly, we see that the scope of this algorithm needs to be expanded beyond just four simple steps. Scope is the extent of the area or subject matter that something deals with or to which it is relevant.[#] Just as in other disciplines, the concept of scope is important in many facets of programming. If the scope of our algorithm is too narrow and doesn’t include enough details to complete the task correctly, we’ll go hungry without a proper sandwich. Equally, if the scope is too broad and includes too many unnecessary details, we may starve before reading through the whole thing… or get too bored with its tedium and just order pizza instead! So let’s expand each step:
- Gather food materials
  - Remove bread from the shelf in the pantry
  - Remove lettuce and tomato from the vegetable drawer
  - Remove sandwich spread from the door shelf
  - Remove deli meat from the meat drawer
- Set up countertop
  - Place materials from the pantry and fridge off to the side
  - Remove the knife from the utensil drawer
  - Remove the plate from the shelf
  - Place plate and knife on the counter
- Build sandwich
  - Remove two slices of bread from the package and place them on the plate
  - Remove the lid from the jar and apply spread evenly to each piece of bread using the knife
  - Remove deli meat from the package and arrange four slices onto each slice of bread
  - Flip one of the pieces of bread over and place the face of each side together
- Put materials away
  - Return the package of bread to the pantry
  - Return the leftover lettuce, tomato, and deli meat to their places in the refrigerator
  - Wash off the knife and return it to the utensil drawer
  - Wash off the countertop and dispose of the crumbs
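The steps above map naturally onto code. In this Python sketch (the language, function names, and ingredient lists are all invented for illustration), each top-level step becomes a function and its sub-steps become the lines inside:

```python
# A sketch of the sandwich algorithm as Python functions: each top-level
# step is a function, and its sub-steps become the lines inside it.

def gather_food_materials():
    # Remove bread, lettuce, tomato, spread, and deli meat from storage.
    return ["bread", "lettuce", "tomato", "spread", "deli meat"]

def set_up_countertop(materials):
    # Place the materials off to the side; get out the knife and plate.
    return {"materials": materials, "tools": ["knife", "plate"]}

def build_sandwich(workspace):
    # Stack the layers in order, bottom slice to top slice.
    return ["bread", "spread", "deli meat", "lettuce", "tomato", "spread", "bread"]

def put_materials_away(workspace):
    # Return everything to where it came from.
    workspace["materials"].clear()

# Carrying out the algorithm is just performing the steps in order:
materials = gather_food_materials()
workspace = set_up_countertop(materials)
sandwich = build_sandwich(workspace)
put_materials_away(workspace)
print(sandwich)
```

Notice that the code has exactly the same shape as the bulleted outline – four steps, carried out top to bottom – which is the whole point of writing the algorithm first.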
With our scope expanded to fit the needs of someone who may not know what sandwiches are, or who isn’t familiar with the fine art of American kitchen work, more details arise. We see what sandwiches are made of, and we see how to stack those materials together. Of course, we didn’t explain that you should wash your hands first, we didn’t describe how to apply the spread to the bread, and we didn’t elaborate on the best way to arrange the slices of meat onto the bread. Further additions could be made to our algorithm, then even more information about those additions might arise, and so on… but at a certain point, we have to assume that the person making the sandwich can complete the task without each minute detail being explicitly described.
Now imagine that we’re tasked with writing a program that a robot would use to make a sandwich in your kitchen. That scope would need to be much wider. There are many things that we, as humans, do without a second thought that a robot would need to know a great deal about. Let’s take the first step as an example: “Remove bread from the shelf in the pantry”. In more granular detail, that means walking over to the pantry, opening the door, locating the package of bread, removing it from the shelf, closing the pantry, carrying the bread to the countertop, and placing it adjacent to the work area. These are all things a robot has no idea how to do until we describe them. Where is the pantry? How does the robot walk to it? What does it take for the robot to open the door? Locate the bread? Pick the package up without squishing it? Each of these new details leads to needing even more details to accomplish what is, to us, a very simple task.
It may be becoming clear that how we think about our actions and the things we do every day only includes a small portion of all the minuscule details required to actually do them. Thankfully, we don’t need to calculate every step we take while we walk to ensure that we don’t fall over. To open a door, we don’t need to judge the angle by which our arm needs to bend, the balance we need to reach out to the doorknob, the correct amount of pressure to apply with our hand, or any of the complexity of the twist we need to give the doorknob to open it. We just do these things automatically, without hesitation, because once we learned how to open doors it was all packed away into our brains to reuse whenever we needed it. These algorithms come so naturally to us, and using them happens so quickly, that we combine all these details into one motion. To effectively program a machine to do useful things for us, we need to learn how to think differently.
Although there is some disagreement on the exact definition of Computational Thinking, the simplest way to describe it might be just how it sounds: “the act of thinking like a computer”. It’s the process of looking at the world and translating it into a form that a computer can most easily use to efficiently complete a task or solve a problem. Computers compute things; that is to say, they make computations. They take input information, process it, and compute a result. For programmers, the main use of Computational Thinking is to effectively design algorithms and troubleshoot issues with them. In the previous example, we started to discover that if we wanted to make a sandwich-building algorithm and use it to program a robot to build us a sandwich, we’d have to be incredibly specific, detailed, and exact. We’d have to think more deeply about everything having to do with the process, and for most of us, that means we’d need some new thinking methods. This is where some of the core components of Computational Thinking come in handy. Let’s peel back the layers of Computational Thinking to find what makes it useful in programming, and how it can generally improve our ability to solve problems and reason about situations.
Using Abstraction and Pattern Recognition
One of the main benefits of using a computer is making a process repeatable for different variations of the same type of task. Although there are countless examples we could use, let’s look at one that all computer users are probably familiar with: opening a text document in a word processor. If you open 20 different text documents that all contain different text or media content, you generally expect the Microsoft Word application to open them all in the same predictable way. You double-click the file icon, there’s a loading screen while the application starts up, then you see the interface of the application with the contents of your file. You expect to be able to edit all text files like this with the same set of tools you’re used to – like adding text in different fonts, sizes, and colors using the word processor’s controls and then changing it with various keyboard commands. You expect to be able to save your changes to any of these documents, close the program, and then reopen the same file later to see the same changes you made. Microsoft Word and word processors like it are able to do these things to all sorts of different text documents thanks to a concept known as Abstraction.
Abstraction is the act of creating a generic conceptual model of a type of item by looking at a set of similar items and picking out what they have in common. By learning how to remove more specific individual attributes and focus attention on details of more general shared significance, you can create more useful code that can accomplish more broadly useful tasks. Other terms for Abstraction could be ‘Generalization’ or ‘Modeling’.
To display a text document, a word processor program is coded to look at general details like the file type rather than at the specific content of the document. This way it can open a file with a ‘.txt’, ‘.rtf’, or ‘.docx’ extension, regardless of what the file actually contains. This makes the process repeatable for any variation of text files you’ve created or downloaded. When editing the text document, the word processor only knows the abstract concept of what making text bold or italic is; it doesn’t need to be coded for any specific text. The process of making the word “Potato” bold uses the same functions as making the word “Chocolate” bold. When it saves the file, too, it has been programmed with the abstract concept of taking whatever contents are in the document and saving them to the computer’s file storage as a text file – it doesn’t need to know exactly what’s in the document, just that it’s a compatible text file type. The process of saving your essay for English class uses the same functions as saving your grocery list!
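A minimal sketch of that idea in Python (using Markdown-style asterisks to stand in for real formatting, since an actual word processor flips formatting flags on characters rather than wrapping them in symbols):

```python
# An abstract "make bold" operation: it works on ANY text,
# not on one specific, hard-coded word.
def make_bold(text):
    return "**" + text + "**"

print(make_bold("Potato"))     # **Potato**
print(make_bold("Chocolate"))  # **Chocolate**
```

One function, written once, handles every word that will ever be bolded – that reuse is what abstraction buys you.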
Creating an Abstraction, or a Model, of an object or a process is heavily dependent on being able to define similarities between objects of the same type and recognize patterns in processes. Let’s look at an example with a useful object we all know and love: coffee mugs. Most coffee mugs have a handle, designs or colors on the outside, a weight that can be measured, a measure of spatial volume inside, and, depending on the material they’re made from, they may or may not be safe to heat in a microwave or run through a dishwasher. So an abstract model of a coffee mug could look like this:
- Handle Shape
- Container Volume
- Design Description
- Exterior Color
You’ll notice that this short list of attributes can help define virtually any coffee mug. It would be very useful in the code for a website that sells coffee mugs, or in the programming of a machine that makes them. The scopes of these two applications are very different, though, and I suspect the machine that makes the mugs would require much more information than this small Abstraction provides – but it’s a good start at defining and categorizing coffee mug data!
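Sketched in Python, that abstract model might look like the class below. The attribute names mirror the list above; the example mug’s values are made up:

```python
# A general model of a coffee mug: the class captures what all mugs share,
# while each instance fills in the details of one specific mug.
class CoffeeMug:
    def __init__(self, handle_shape, container_volume_ml,
                 design_description, exterior_color):
        self.handle_shape = handle_shape
        self.container_volume_ml = container_volume_ml
        self.design_description = design_description
        self.exterior_color = exterior_color

# One specific mug, described entirely by the general model:
my_mug = CoffeeMug("C-shaped", 350, "World's Okayest Programmer", "blue")
print(my_mug.exterior_color)  # blue
```

A mug-selling website could store thousands of these objects, each with different values, while all the code that displays or searches them is written once against the shared model.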
Processes can be modeled the same way, and we’ve already begun to cover this in talking about algorithms. However, an abstract algorithm for building a sandwich would provide fewer specific details than our previous example. For instance, what if the sandwich should be made without meat? What if the food ingredients are stored somewhere other than a pantry, shelf, or refrigerator? What if it’s preferable to apply the spread with a spoon rather than a knife, due to the consistency of the condiment? Or perhaps we need no spread at all, but other ingredients instead? An abstract algorithm would provide flexibility in these regards and would probably read much more like our very first example:
- Gather food materials
- Set up workspace
- Build sandwich
- Put materials away
- Serve sandwich
This abstract process of making a sandwich leaves out a lot of necessary details. Yet if you were writing a program where the user would define those details – for a more customized sandwich experience – then adding this abstract process to our code can help keep it organized and expandable. We can write code for multiple ways to gather the materials or build the sandwich. Then, depending on how it’s used and by whom, our program or robot can make use of this more generic algorithm, filling in the details depending on conditions and integrating this abstract process into a custom, automated sandwich-building experience.
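One way to express that flexibility in code is to let the caller supply each step. In this hedged Python sketch (the function names and the meatless ingredient list are invented for the example), the abstract outline stays fixed while every detail is passed in:

```python
# The abstract algorithm as a reusable outline: each step is a function the
# caller provides, so the same process serves many kinds of sandwich.
def make_sandwich(gather, set_up, build, put_away, serve):
    materials = gather()
    workspace = set_up(materials)
    sandwich = build(workspace)
    put_away(workspace)
    return serve(sandwich)

# One possible set of user-defined details: a meatless sandwich.
result = make_sandwich(
    gather=lambda: ["bread", "hummus", "cucumber"],
    set_up=lambda materials: {"materials": materials},
    build=lambda workspace: workspace["materials"],
    put_away=lambda workspace: None,
    serve=lambda layers: " + ".join(layers),
)
print(result)  # bread + hummus + cucumber
```

Swapping in a different set of step functions – different ingredients, a spoon instead of a knife – changes the sandwich without touching the outline at all.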
Patterns are behind these and all other real-world applications of Abstraction. Among all the coffee mugs you’ve used or seen throughout your life, you’ve naturally picked up on the common pattern of their attributes: how they’re shaped, what materials they’re usually made from, their relative size and weight. In a basic way, recognizing these patterns is what allows you to immediately identify a coffee mug when you see one, and it contributes to why you might find an oversized coffee mug funny or odd, or a coffee mug with an improved design useful and desirable. We naturally recognize patterns all the time, but refining this ability to pick up more nuanced or non-obvious patterns is incredibly useful when developing software and programming a computer.
Recognizing non-obvious patterns and finding useful similarities in objects, entities, and situations doesn’t usually come naturally for most people – including myself. At times it takes a careful systematic approach, which is where these core concepts and first steps of Computational Thinking come in very handy. Next, we’ll explore these concepts in some basic detail and then finally demonstrate how Computational Thinking can be used to solve a problem with programming.
Decomposition and Structured Analysis
When you were growing up, do you remember those kids who were always taking things apart to see how they worked? Maybe you were one of them, or some of your friends were. It's common for the more curious kids and young adults to disassemble their toys or electronic devices to see what's inside, or to hypothesize about what makes them operate the way they do. Nowadays I'm sure that knowledge is just a tap and a swipe away on their smart-phone, but I grew up before the internet really began, without Google to simply link you to a diagram or a mechanism description. Either way, I've definitely seen kids take apart their bicycles to mess with them or replace/upgrade parts on them, only to grow up to become adults who work on their cars the same way. Whether you were born with an interest and aptitude for it, or you've only dabbled in the activity occasionally – this sort of deconstruction and investigation is a crucial part of 'Thinking Like A Programmer'!
In Computer Science, Decomposition is the process of breaking a complicated or complex problem into smaller, more manageable parts that are easier to understand and work with. It's often the first and most important step of making just about any sort of computer program with code, and it's arguably one of the most universally useful skills you can pick up while learning how to develop software. From creating relatively simple programs that just display information on our computer and smart-phone screens, all the way up to huge world-changing applications like social media websites or computer operating systems, the ability to decompose a whole complete task into smaller pieces is an incredibly important skill. This isn't limited to just coding and programming tasks either. Just about everything in our lives can be made easier to understand or work with when it's broken down in a useful way. I believe that everything we do can be made better – or we can become better at doing it – when it's turned into a step-by-step, piece-by-piece description and understanding.
Although we, as humans, naturally recognize many common patterns and might be able to conjure up basic abstractions easily, we often struggle to naturally break large concepts down into smaller ones. This makes sense when we consider how and why our brains have developed and evolved throughout the millennia. We can recognize patterns fairly easily because we've adapted to hunt and forage in order to survive and meet our natural needs. By seeing that certain animals congregate in specific areas or migrate along predictable paths, or by inspecting plants to discern which should be avoided as poisonous and which are delicious, should be sought out, and even how to seek them out, we've become amazing recognizers of patterns and builders of abstract models. In doing so we've developed the ability to condense many pieces, parts, clues, cues, and ideas into a single coalesced symbol that represents all of the minutiae it's composed of. Thus, working backward and reversing that process to decompose any single object or big idea into its parts can be incredibly difficult and counter-intuitive at first.
On a typical day, we benefit greatly from being able to look at a car, instantly see it simply as a car, and then go about our business. It's shaped like what we know to be an automobile, it has 4 wheels, it doesn't have a truck-bed, and it appears to seat 2-4 people comfortably … so it's a car. That process happens in our heads without us trying – simple, done. Yet we might fail to immediately classify this car by its make and model until we look closer and think about its specific styling or shape. Even then we might not see right away that its exterior has modifications, or hear in its rumble that there are modifications in the engine which distinguish it from others of its type. Further, we would never be able to quickly see and process the brand of each individual part on the car. We could hardly hope to count just how many parts are in the car, or know where each of them was manufactured. We'd have virtually zero chance of being able to tell the material each part is made from – the metals, plastics, composites, lubricants, and fibers. The truth is, when you see a car you are actually seeing approximately 30,000 parts, pieces, screws, fasteners, and separate components. That's just too much for our brains to process. We look at a car and immediately see just 1 object, because anything more than that requires a bit of extra brainpower that we might not be able or need to spare.
So if I were to ask you to create a Social Media website for me, it might sound impossible – even if you already knew how to write code and create simpler websites. I'm sure I wouldn't be able to do it. There's so much that goes into a large platform like Facebook or LinkedIn that attempting to create your own version of them is a daunting proposition. Where do you even start? Just decomposing the concept of "Social Media Website" into smaller, actionable tasks that are possible for one person to do feels like a mountain of work on its own. But how can that be? You use websites like these all the time. If you're like the average American, you might spend over 2 hours per day on them! [#] You're telling me you can't just easily start building your own?!
Something that really comes in handy when decomposing large ideas into many smaller ones and organizing them into an understandable arrangement is known as Structured Analysis. This is a more formal method for analyzing things with a structured approach so that they can be organized logically and assembled into a computer program that performs a complex task or set of tasks. The major parts of Structured Analysis include strategies for dividing the topic of focus into a hierarchy of lists of smaller and smaller scope, creating diagrams of the items on these lists to really see where and how the information flows, and finally designing a cohesive structure that accurately reflects these lists and diagrams. Commonly, the strategies of Structured Analysis involve taking the viewpoint of the information or data involved and paying attention to what actions or operations need to be performed to change the data we have into the data we need.
Sure, that might sound like a bunch of high-powered 'Type A' mumbo-jumbo that business people in suits and ties use in board room meetings, but in a more basic, scaled-down capacity it's incredibly useful even in smaller projects. On one hand, there are definitely very formal, standardized ways to go about Structured Analysis and Decomposition to produce enterprise-level software like gigantic, globally reaching Social Media websites. On the other hand, there are less formal, more free-form ways individuals can go about Decomposing and Analyzing things, which we should consider here because they apply to writing software at any level and translate to the rest of our lives, too.
Computer programs and applications are usually meant to solve problems, big and small. Lord knows we have plenty of problems in our lives and at all levels of society that need solving. So it's important to know, as you progress into learning how to program, that your mission is to help solve these problems. Whether it's simply solving the problem of everyday boredom with a fun video game, solving the problems of a struggling business with an app that enables better coordination to make more money, or even solving a wider, far-reaching societal problem like homelessness or mental health – it all starts with very small, elegant solutions to simpler problems. Often these solutions involve working with minor pieces of data, like numbers or small strings of text. That's why many, if not all, beginner programming tasks revolve around knowing how to accurately work with these minor pieces of data, as you will soon see or have already noticed in your studies.
Three Simple Steps
When someone comes to me with an idea for a new app, some work on their website, or virtually any general question in software or programming – or when I sit down to write code or plan out a project I'm about to work on – I use three simple steps for decomposing the situation. They involve Input, Processing, and Output.
Step 1 – What’s the INPUT?
First, I look at the basic starting data for the app I want to create. These are the inputs that start my own work process. For instance, with an idea for a new app, I will usually start with:
- (a) Who will be using the app?
- (b) On what device, and in what circumstances, will they be using the app?
- (c) What will they expect the app to do or accomplish?
Knowing who will be using the app puts everything else I'll be working on in context. What your targeted users are capable of, or already understand, should guide how the app functions and how it displays information to them. The style and format of the displayed information should really be catered to who your users are and what they expect.
Knowing about what devices they’ll use the app on and in what situations they’ll need to use the app informs a great deal about how the app should be developed. The device may dictate what programming language I use since some devices can only process certain languages. The circumstances in which they use the app may call for the app to show them sparse simple information quickly, or long detailed information slowly.
Finally, examining what the user expects the app to accomplish brings it all together. It’s the expected solution to their problem, and fully understanding how that solution should happen will really inform how I should approach the problem.
You may notice that this is also loosely structured into its own Input, Process, and Output. Keep reading and you may see that pattern continue to emerge.
Step 2 – PROCESSING
Next, I look at pulling apart these inputs into smaller, more useful ideas. I can take the (a), (b), and (c) from Step 1 and squeeze as much detail out of them as is useful. The details, and how useful they are, are usually best determined by asking and answering questions for myself. For instance, let's break down (a) "Who will be using the app?" That is, who is the typical 'User' of the app? What will the app need to know about them? Their name? Their email? Would it be useful to have them create a username handle? Will they need to log in to the app with a password?
Then, to better understand this, we can brainstorm thoughts on (b) and (c) from Step 1 in the context of who the users may be, to fill in any blanks. What is this type of user going to expect from the app experience? A dashboard to organize their personal information? A profile to display their public information to other users? Will the users be communicating with each other? Will they be conducting other interactions? Like transactions? Sales of services? Sales of items? Will these users be sharing media with each other? Videos? Memes? How should these features operate given who the user is?
So, for example, although I may have started out knowing only that the users are 'Rock music lovers', diving through questions like these and finding their answers reveals tons of insights that better flesh out who these Users really are. We might look at websites and apps they are used to and derive some of these answers, we could survey some of them and ask about their preferences and expectations, or do other research. Our users are actually Rock music lovers who hate loading screens, don't like remembering passwords, find having a profile of their own very important, like to have some customization over the style of their own profiles, and really like sharing their music interests with other users. Not only are they music lovers, but they're Stylists and Advocates too!
When we do this for each part of our starting Input data in Step 1, we’re given a more complete picture. We can make lists and arrange things to make more sense of them. We can prioritize them between more or less important ideas. We can really begin to pull together a plan.
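Answers from a brainstorm like the one above often end up, concretely, as a simple data structure in code. Here's a minimal Python sketch; the field names are just my assumptions drawn from the questions asked in Step 2, not a prescribed design:

```python
from dataclasses import dataclass, field

# A hypothetical User record built from the Step 2 answers:
# name, email, a username handle, and the profile-style and
# sharing preferences our Rock-music-loving users cared about.
@dataclass
class User:
    name: str
    email: str
    username: str
    profile_style: str = "default"          # users want custom styles
    shared_interests: list = field(default_factory=list)

fan = User(name="Alex", email="alex@example.com", username="rockfan42")
fan.shared_interests.append("classic rock")
print(fan.username)  # the handle other users will see
```

Each question you answered becomes a field; each feature the users expect becomes code that reads or writes those fields.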
Step 3 – Forming the OUTPUT
Now I would consider the output that the application will give: the solution to the problem, composed of many smaller solutions to many smaller problems. In considering and refining (b) and (c), we can look at each of the questions and answers from the previous points of Step 1. (a) informs (b), and (b) informs (c). We will have varying answers depending on what type of app this is and who our users will be. These answers are the solutions. Each answer will point to individual pieces of data that we will need to work with. These pieces of data will require some code to gather them from the user or other sources, store or process them in some way, and display the result of that processing to the user or others. At the core of each piece of this step are many series of many small operations on the data. An example of one of these series of small operations might be the exact details of how a user signs up for the app:
- The app will load a screen with text fields, one of them being the username text field
- The user will enter their desired username into a text field and click a button to submit it
- That username will be stored into a variable which sits in the device’s memory until milliseconds later when it gets scooped up by a function, and arranged into a User object.
- That User object is sent off to the app’s server on the “cloud”
- The server will run a process that makes sure the username conforms to standards like not being a bad word, not starting with a number, and not containing spaces
- The server will send the username along with other pieces of data for the User into the database
- If the User was saved successfully in the database, the server will reply to the app over the internet
- The app will get the reply, see it was a success and run a function to display a ‘success’ message
- The app will display the ‘success’ message and offer a button or link to move to the next screen or page in the app.
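The nine steps above might be sketched, very loosely, as a chain of function calls. Everything here is a hypothetical stand-in written in Python – in a real app the "server" and "database" would live on other machines, behind a real framework's API – but the shape of the flow is the same:

```python
# A loose sketch of the sign-up flow from the list above.
# FAKE_DATABASE and all function names are invented for illustration.

FAKE_DATABASE = {}

def server_validate(username):
    """Step 5: enforce the standards named in the text
    (not empty, not starting with a number, no spaces)."""
    return bool(username) and not username[0].isdigit() and " " not in username

def server_save(user):
    """Steps 6-7: store the user and report success or failure."""
    if not server_validate(user["username"]):
        return {"ok": False}
    FAKE_DATABASE[user["username"]] = user
    return {"ok": True}

def sign_up(username):
    """Steps 1-4 and 8-9: gather input, send it off, show the result."""
    user = {"username": username}   # steps 2-3: collect and package
    reply = server_save(user)       # steps 4-7: "send" and store
    if reply["ok"]:                 # steps 8-9: display the outcome
        return "success"
    return "please choose another username"

print(sign_up("rockfan42"))   # success
print(sign_up("42rockfan"))   # please choose another username
```

Notice that even this toy version naturally splits into the same small pieces the list describes – one function per responsibility.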
Each of steps 1-9 will require some code to be written, and each of these pieces of code can be put into functions. Each of these functions can be defined by a small step-by-step algorithm. Finally, we've boiled the large, ambiguous task of "have the user create a username handle" down into a list of small, easy tasks that can each be done in a short time. One of them might be to write the code that checks whether the username entered by the User has spaces in it. Depending on the programming language the app is being written in, this can be as simple as three lines of text (maybe even one line, but three lines would be easier for us programmers to read and make sense of later): one line to give the function a name for other parts of the code to know it by, and then a few lines of code to take the text of the username, run a function or two on it, and determine "True, this does not contain spaces" or "False, this contains spaces – we can't use it".
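In Python, that space-check really can be that short. A minimal sketch (the function name is my own choice):

```python
def username_has_no_spaces(username):
    # True means we can use it; False means it contains spaces.
    return " " not in username

print(username_has_no_spaces("rock_fan"))   # True
print(username_has_no_spaces("rock fan"))   # False
```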
You might not immediately think that to solve some huge societal problem you would need a few lines of code to see if a piece of text has spaces in it, but you do. Along with other stuff, too – I would imagine.
At the core of this, I hope you can start to see that "Input, Processing, and Output" are not only the fundamental operations that computers do, but also the fundamental operations that can be used by YOU while working with them. Each stage can itself be broken down into its own "Input, Processing, and Output". The Input, "I", can be broken down into "Input, Processing, and Output". The Processing, "P", can involve "Input, Processing, and Output". The Output, "O", can ALSO involve "Input, Processing, and Output". Each of those lower I, P, and O stages can likewise be thought of in terms of I, P, and O – and so on, and so on, until you reach something you can easily write code to do. Even your code will involve "Input, Processing, and Output"!
In fact, that’s the whole point of this approach. Each function you write in your code, each process that numerous functions will carry out, and each portion of your app made up of these processes can all be looked at as having an Input, Processing, and Output. Thus I’ve found that a strategy which works for me is decomposing everything into a three stage scheme – What am I starting with? What do I need to do to my starting info? What do I expect my results to be? The more you learn about programming and computers, the more you will see that all the way down to the even electrical signals on the circuit board, there will be these 3 stages: “Input, Processing, and Output”!
This is just one simple strategy among many hundreds, if not thousands, of different ways to organize your workflow, and it's important to cultivate your own. Find what works best for you. If making lists helps you, do that! If making flowcharts and diagrams helps you, go for that! The beautiful thing is that you can use a combination of strategies depending on the problem you're looking to solve or the software you're working on.
A Thought About Our Own Problems
When you spend days, weeks, months, and years looking at huge, complex problems and solving them with small, easy programming tasks through decomposition, you will begin to look at the rest of your life in a similar way. What used to be a long, stressful, jumbled day of errands, chores, shopping trips, awkward conversations, and long freeway commutes can be turned into an easy, enjoyable series of problems solved with small actions and planned strategies.
Things that other people find stressful can be much more easily handled – or even made fun – by mentally breaking them down into bite-sized pieces and organizing them to be handled with planned ideas or even moment-by-moment improvised actions. Awkward conversations are easier when you are present, in the moment, with the situation sized up. Who are you talking to? What do they like? What have they reacted to pleasantly before? The situation has an Input, Process, and Output: each piece of the conversation, in what you both say, is an Input or Output, and what you are doing while considering and feeling what the other person has said is your Processing.
In life, as in programming, you can't always choose what Input you get, but you often have some serious control over your Process and how you handle things. Be logical, think a little like a computer, make decisions, and take actions that best suit what Output would be ideal for all parties involved. The better you are at considering the details in as decomposed a manner as you can, the better you can handle … just about anything, really!
Summing It All Up
First, we explored in broad and general terms what Programming is used to do in various industries throughout the world. This alluded to the idea that you can probably use Programming to improve your productivity and career pursuits in whatever industry you work in. Of course, given the current trends in technology, becoming a Programmer as your main career, or working in a related field, offers a highly desired and well-compensated vocation – so we discussed that briefly. We explored the basics of what source code is, the types of files that source code is saved as, and some basics about the programming languages that source code is written with. It's my hope that this might provide anyone stumbling in off the street with a starting point for further studies into these topics.
Next, as the main focus of this page, I tried to lead you to the idea that all of Programming can be boiled down into "Input, Processing, and Output". To get there, it was important to see why you should boil things down in the first place, why it's important to be good at doing so, and how it can be done – so we explored ideas on how to Think Like A Programmer!
In describing 'Thinking Like a Programmer', I tried to give an example-driven description of Algorithm basics and the Computational Thinking that can be used to construct them – which is mainly composed of 4 key parts. I described these parts in reverse order so that the reader could see the finished product first, the Algorithm: a step-by-step description of a process that takes a set of inputs and uses them to accomplish the program's goals, producing the desired output.
Algorithms are guided by Abstraction, because when you create a generic conceptual model of actions or items you can arrange them in a repeatable, flexible, and dependable way. Abstraction relies heavily on Pattern Recognition: discovering similar traits among different things or processes, and identifying predictable frequency in repeating occurrences, by looking at a set of similar items and picking out what they have in common.
Finally, we explored the idea that Decomposition does not always come naturally, but can be the key to accomplishing very complex tasks, because as these tasks are broken down into smaller and smaller pieces, those pieces increase the clarity of what needs to be done to find a solution. Although this can be accomplished in a standardized and involved way with Structured Analysis, using hierarchical lists, diagrams, charts, and graphs, it is often advantageous to talk through the problem by considering the Input, Process, and Output required. Then, looking at the Input, Process, and Output of each Input, Process, and Output in turn can continually provide a way to decompose a problem until easily actionable tasks can be coded.
Using this or similar concepts in life is possible, and often leads to a more organized and enjoyable day. Planning, acting on the plan, and being guided by the results will improve your life; and you can become increasingly better at that by improving your skills with software development.
Thank you for reading this Conceptual Introduction to Programming! As I refine this page – fixing errors and better clarifying my descriptions – I very much value your own Inputs in the form of comments, questions and concerns to my email address: firstname.lastname@example.org
On the next page I will be discussing how to put these concepts to use by guiding you through a few examples with real code exercises you can do from your own desktop or laptop! (Coming soon…)
(work in progress)