i’m currently taking Applied Plotting, Charting & Data Representation in Python and have been introduced to a “relevant” model.

as validated by my years of professional experience in ICT, communication is a major part. as technologists, we almost always focus on the processing and analysis of information. i’m glad that Data Science “explicitly emphasises” the importance of communicating results as well. most people still refer to the field as IT (but that IMHO is an “antiquated” way of thinking). not just because it was “recently” rebranded as ICT by some governments and agencies, but because ICT highlights the other part of the equation and is a much more holistic approach to technology.

for your reference, here’s the Visualization Wheel by Alberto Cairo:


i also added it to my GitHub repository:


i’ve “shared” something a bit unusual: this Jupyter notebook consists entirely of “Markdown” cells (no Python code), as it mainly talks about the initial step known as data “cleaning”. some “transformations” are usually warranted after importing datasets, before working with them or performing Exploratory Data Analysis.
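to give a flavour of the kind of “cleaning” i mean, here’s a minimal sketch using a made-up dataset (the column names and values are purely illustrative, not from my notebook):

```python
import pandas as pd

# hypothetical raw data standing in for a freshly imported dataset
raw = pd.DataFrame({
    "name": ["  Alice", "Bob ", None, "Dana"],
    "age": ["34", "n/a", "29", "41"],
})

# typical transformations after import, before any EDA:
cleaned = raw.dropna(subset=["name"]).copy()                      # drop rows missing a name
cleaned["name"] = cleaned["name"].str.strip()                     # trim stray whitespace
cleaned["age"] = pd.to_numeric(cleaned["age"], errors="coerce")   # "n/a" becomes NaN

print(cleaned)
```

nothing fancy, but these few calls (dropna, str.strip, to_numeric) already cover a surprising share of everyday cleanup.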

as it is mainly words, it may be “ambiguous” to some (as everything seems “obvious” to me). kindly let me know if there are things that aren’t “clear” or could be explained better so i can post revisions. or if you know of supplementary (hyper)links or other resources “freely” available online, please let me know so i can include them.

my updated GitHub repository is at:


to “complete” “slicing” DataFrames, i discuss loc and iloc. i think this is enough to cover the “basics” of Python. as you know, i will start trying to delve into statistics to a.) further my skills, and b.) see if i can be “useful” to my wife.
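the gist of the loc/iloc distinction, in a tiny sketch (the DataFrame here is made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({"a": [10, 20, 30]}, index=["x", "y", "z"])

# loc selects by label; iloc selects by integer position
print(df.loc["y", "a"])    # selects via the index label "y"
print(df.iloc[1, 0])       # selects via positions (row 1, column 0)

# with slices: loc includes the end label, iloc excludes the end position
print(df.loc["x":"y"])     # rows x AND y
print(df.iloc[0:1])        # row x only
```

the inclusive-end behaviour of loc is the detail that trips people up most, so it’s worth committing to memory.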

i was always planning to tackle “advanced” topics - it was just “accelerated” sooner rather than later.

here’s something i “shared” so i can “move on” to statistics:


That said, i can consider revisiting “past” topics based on feedback.

i did a lot of coding in my time and was introduced to neural networks at school, so it wasn’t really a stretch learning Python. i only knew aspects of statistics, so it became obvious that this was the area i had to strengthen to upgrade my data science skills, given my heavy exposure to programming and a little background in artificial intelligence. let me preface it by saying it’s been a while since i’ve “actively” done both, and technology has advanced. that said, i’ve been developing a GitHub repository because i believe the expression that you teach best what you need to learn.

to brush up on the basics and truly understand Descriptive Statistics, i’m perusing version 2 of the ebook Think Stats: Exploratory Data Analysis by Allen B. Downey. it’s framed for programmers, which supposedly makes it better suited to them for learning statistics.

aside from personal growth, my wife (who is well versed in machine learning and teaches programming) and her work team are looking at doing some research that may require this, so there’s a greater incentive to study it.

“Slicing” (that is, creating subsets using indices) DataFrames can be quite useful for partitioning datasets. for those familiar with SQL, it reminds me of the SELECT command paired with an optional WHERE clause.
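to make the SQL analogy concrete, here’s a small sketch (the table and column names are invented for illustration):

```python
import pandas as pd

people = pd.DataFrame({
    "name": ["Alice", "Bob", "Carol"],
    "age": [34, 29, 41],
})

# SQL equivalent: SELECT name FROM people WHERE age > 30
subset = people.loc[people["age"] > 30, "name"]
print(subset.tolist())
```

the boolean mask (people["age"] > 30) plays the role of the WHERE clause, and the column label after the comma plays the role of the SELECT list.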

i know this is a very “basic” treatment, but i used to play a lot of basketball and i believe in the importance of fundamentals. i use a lot of this in my own code, and from what i’ve seen on the internet, it’s very common in shared snippets, so IMHO it’s important to grasp the “basics” - in other words, understanding this helps in making sense of sample code (comments are another thing, but don’t get me started on that “bugbear”…).

here’s my updated GitHub repository:


since i mainly use a Jupyter notebook for Python coding, i use the print() function a lot to help with “debugging”. error “detection” leaves a lot to be desired (that’s one of my few complaints; i still lean towards it being used to introduce programming).

here are a “few debugging tips” that would have been handy to know when learning how to code in Python:
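for instance, a few print()-based habits like these (an illustrative snippet; the variable is made up):

```python
# a string that *looks* like a number -- a classic source of confusion
value = "42"

# 1. repr() exposes the actual type/quoting; plain print("42") and print(42) look identical
print(repr(value))

# 2. f-strings with = echo the expression alongside its value (Python 3.8+)
print(f"{value=}")

# 3. print the type when an operation misbehaves
print(type(value).__name__)
```

the repr() trick alone catches a lot of “why won’t this add?” moments in notebooks, where a cell quietly hands you strings instead of numbers.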


in “major” databases there is sometimes an ETL (Extract, Transform, Load) tool. as DataFrames are the “commonly” used data structure in Python for similar operations (and analysis), you can perform all three functions with them. that said, i prefer to do only the ‘E’ and ‘L’ at import/export time, as they are “simply” accomplished by built-in functions. doing the ‘T’ beforehand would require a for loop reading each row through a file handler, so it’s more “convenient” for me to manipulate the data once it’s imported.
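a minimal sketch of that workflow, with an in-memory CSV standing in for a real file (the data and column names are invented):

```python
import io
import pandas as pd

# hypothetical CSV standing in for a file on disk
csv_text = "city,temp_f\nManila,90\nOslo,41\n"

# E: extract -- one built-in call, no manual file-handler loop
df = pd.read_csv(io.StringIO(csv_text))

# T: transform -- done column-wise on the DataFrame AFTER import
df["temp_c"] = ((df["temp_f"] - 32) * 5 / 9).round(1)

# L: load -- again a single built-in call (here to a buffer instead of a file)
out = io.StringIO()
df.to_csv(out, index=False)
print(out.getvalue())
```

the point is that the ‘T’ becomes a vectorised column expression instead of a row-by-row loop over a file handle.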

it’s important to note that choosing which dataset to use can involve unconscious/implicit bias. therefore, in analysis (and in offering insights), you need to consider the source: no matter the prevailing “wisdom”, one needs to distinguish between fact and opinion.

here is the updated GitHub repository: