HOW TO PREP TO SQL – SUB-QUERIES

In this installment of the How to Prep to SQL series, I'm going to review how to translate a Prep flow into SQL using a sub-query. A sub-query is a query nested inside another SQL statement. All of the examples below use Tableau Prep version 2020.2.3 and MS SQL Server, and utilize the Tableau Superstore data set.

In this example I want to return all of the orders from Superstore where the discount is greater than the average discount for all the orders.

There are two different ways I would do this in Prep. The first is to:

  • Connect to the Superstore data set
  • Insert a clean step and keep only the order ID & discount fields
  • Add an aggregate step and calculate the average discount
  • Join the aggregate and the first clean step together where the discount from the clean step is greater than the average discount

The final flow will look like this:

The second approach in Prep is to use an LOD (level of detail) calculation in the first step. The calculation is {FIXED : AVG([Discount])}. Then create another calculation to check whether the discount is greater than the average discount, [Discount] > [Avg Discount], and then right-click on True and select Keep Only.

The final flow will look like this:
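
If you are curious how that FIXED LOD translates outside of Prep, it maps directly onto a SQL window function. Here is a minimal sketch using Python's built-in sqlite3 module on made-up toy data (not the real Superstore table; window functions need SQLite 3.25 or newer):

```python
import sqlite3

# Toy stand-in for the Superstore orders table (illustrative values only)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id TEXT, discount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("A-1", 0.0), ("A-2", 0.25), ("A-3", 0.5), ("A-4", 0.25)],
)

# AVG() OVER () averages across every row, just like {FIXED : AVG([Discount])}
rows = conn.execute(
    """
    SELECT order_id, discount
    FROM (
        SELECT order_id,
               discount,
               AVG(discount) OVER () AS avg_discount
        FROM orders
    )
    WHERE discount > avg_discount
    """
).fetchall()
print(rows)  # only the order above the 0.25 average
```

SQL Server supports the same AVG() OVER () syntax, so this is an alternative to the sub-query shown below.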

To do this in SQL, first I am going to write a SELECT statement to return the Order ID & Discount fields from my Orders table in the Superstore database:

SELECT
O.[Order ID],
O.[Discount]
FROM Superstore.dbo.Orders$ O

To get the average discount I am going to use a sub-query within the SELECT statement. The sub-query is a SELECT statement that calculates the average discount from the Orders table:

SELECT
O.[Order ID],
O.[Discount],
(SELECT AVG([Discount]) FROM Superstore.dbo.Orders$) AS [AVG DISCOUNT] -- this is the sub-query
FROM Superstore.dbo.Orders$ O

To limit the results to just the orders with an above-average discount, we need to add a WHERE clause that compares each order's discount to the sub-query that calculates the average discount. (In the first Prep example this was done through the join; in the second Prep example it was done by creating the Boolean field, right-clicking on True, and selecting Keep Only.)

The query is:

SELECT
O.[Order ID],
O.[Discount],
(SELECT AVG([Discount]) FROM Superstore.dbo.Orders$) AS [AVG DISCOUNT]
FROM Superstore.dbo.Orders$ O
WHERE O.[Discount] > (SELECT AVG([Discount]) FROM Superstore.dbo.Orders$)
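
To see the sub-query pattern in action without a SQL Server instance, here is a small sketch using Python's built-in sqlite3 module; the table and values are invented stand-ins for the Superstore orders:

```python
import sqlite3

# Toy stand-in for Superstore.dbo.Orders$ (illustrative values only)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id TEXT, discount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("A-1", 0.0), ("A-2", 0.25), ("A-3", 0.5), ("A-4", 0.25)],
)

# Same shape as the SQL Server query above: a scalar sub-query in both
# the SELECT list and the WHERE clause
result = conn.execute(
    """
    SELECT order_id,
           discount,
           (SELECT AVG(discount) FROM orders) AS avg_discount
    FROM orders
    WHERE discount > (SELECT AVG(discount) FROM orders)
    """
).fetchall()
print(result)  # [('A-3', 0.5, 0.25)]
```

Only the order whose discount (0.5) exceeds the overall average (0.25) survives the WHERE clause.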


This is one example of how you could use Prep to help you understand sub-queries. If you are interested in another example or have any other questions on how to learn SQL by using Prep please let me know!

HOW TO PREP TO SQL – DATES

This is the second post in a series on how you can use Tableau Prep to learn SQL. In this post I'll go over how to create custom dates in Prep & SQL. This may also be handy for people who know SQL and aren't as familiar with the Tableau date functions. The SQL and Prep examples are all based on the Tableau Superstore data set. The SQL is written for MS SQL Server; if you are using a different flavor of SQL, the syntax may differ.

The beauty of Prep is the point-and-click functionality that makes creating date dimensions easy. Need to change a timestamp to a date? Click on the field type icon in the column and change it to a date. Need to change a date to a year number? Click the column that has your date field, click on the …, and open the Convert Dates menu to see the options below. Note: if you want to keep the original date field, duplicate it and do the transformation on the copy.

The built-in Convert Dates options in Tableau Prep: date & time, year number, quarter number, month number, week number, and day of the month.
If you’re wondering how to create these date dimensions in SQL, the table below will have a snippet of code. The full query is listed below the table.

Date Type (returned type) | Tableau | SQL Server
Timestamp to Date (Date) | DATE([Order Date]) or click on the field type and select Date | CAST(O.[Order Date] AS DATE) AS ORDER_DATE
Day of Month (Number) | DAY([Order Date]) | DAY(O.[Order Date]) AS DAY_NUM
Day of Year (Number) | DATEPART('dayofyear',[Order Date]) | DATEPART(dy, O.[Order Date]) AS DAY_OF_YR_NUM
Epoch Date (Number) | DATEDIFF('second',#1970-01-01#,[Order Date]) | DATEDIFF(s, '1970-01-01 00:00:00', O.[Order Date]) AS EPOCH_DATE
Day of Week Name (Text) | DATENAME('weekday',[Order Date]) | DATENAME(WEEKDAY, O.[Order Date]) AS WEEKDAY_NAME
Day of Week (Number) | DATEPART('weekday',[Order Date]) | DATEPART(dw, O.[Order Date]) AS WEEKDAY_NUM
Week Start (Date) | DATE(DATETRUNC('week',[Order Date])) | DATEADD(dd, -(DATEPART(dw, O.[Order Date]) - 1), O.[Order Date]) AS WEEK_START_DATE
Month (Number) | MONTH([Order Date]) or use the Convert Dates menu | MONTH(O.[Order Date]) AS MONTH_NUM
Month Name (Text) | DATENAME('month',[Order Date]) | DATENAME(MONTH, O.[Order Date]) AS MONTH_NAME
Last Day of Month (Date) | DATE(DATEADD('day',-1,DATEADD('month',1,DATETRUNC('month',[Order Date])))) | EOMONTH(O.[Order Date]) AS LAST_DAY_OF_MONTH
First Day of Month (Date) | DATE(DATETRUNC('month',[Order Date])) | DATEADD(DAY, 1, EOMONTH(O.[Order Date], -1)) AS FIRST_DAY_OF_MONTH
Max Month in Data Set (Date) | {FIXED : MAX(DATE(DATETRUNC('month',[Order Date])))} | (SELECT MAX(DATEADD(DAY, 1, EOMONTH(O.[Order Date], -1))) FROM Superstore.dbo.Orders$ O) AS MAX_MONTH
Min Month in Data Set (Date) | {FIXED : MIN(DATE(DATETRUNC('month',[Order Date])))} | (SELECT MIN(DATEADD(DAY, 1, EOMONTH(O.[Order Date], -1))) FROM Superstore.dbo.Orders$ O) AS MIN_MONTH
Today's Month (Date) | DATE(DATETRUNC('month', TODAY())) | DATEADD(DAY, 1, EOMONTH(GETDATE(), -1)) AS TODAY_MONTH
Year (Number) | YEAR([Order Date]) or use the Convert Dates menu | YEAR(O.[Order Date]) AS YEAR_NUM
Today's Year (Number) | DATEPART('year',TODAY()) | YEAR(GETDATE()) AS TODAY_YEAR
Current Date Time (Timestamp) | NOW() | GETDATE() AS TODAY_DATETIME
Quarter Number (Number) | DATEPART('quarter',[Order Date]) | DATEPART(qq, O.[Order Date]) AS QUARTER_NUM
Quarter Start (Date) | DATETRUNC('quarter',[Order Date]) | CAST(DATEADD(qq, DATEDIFF(qq, 0, O.[Order Date]), 0) AS DATE) AS QUARTER_START_DATE
Quarter End Date (Date) | DATEADD('day',-1,DATEADD('quarter',1,DATETRUNC('quarter',[Order Date]))) | CAST(DATEADD(d, -1, DATEADD(q, DATEDIFF(q, 0, O.[Order Date]) + 1, 0)) AS DATE) AS QUARTER_END_DATE

This is the query in SQL:

SELECT
O.[Order ID],
CAST(O.[Order Date] AS DATE) AS ORDER_DATE,
DAY(O.[Order Date]) AS DAY_NUM,
DATEPART(dy, O.[Order Date]) AS DAY_OF_YR_NUM,
DATEDIFF(s, '1970-01-01 00:00:00', O.[Order Date]) AS EPOCH_DATE,
DATENAME(WEEKDAY,O.[Order Date]) WEEKDAY_NAME,
DATEPART(dw,O.[Order Date]) AS WEEKDAY_NUM,
DATEADD(dd, -(DATEPART(dw, O.[Order Date])-1), O.[Order Date]) AS WEEK_START_DATE,
MONTH(O.[Order Date]) AS MONTH_NUM,
DATENAME(MONTH,O.[Order Date]) AS MONTH_NAME,
EOMONTH(O.[Order Date]) AS LAST_DAY_OF_MONTH,
DATEADD(DAY, 1, EOMONTH(O.[Order Date], -1)) AS FIRST_DAY_OF_MONTH,
(SELECT MAX(DATEADD(DAY, 1, EOMONTH(O.[Order Date], -1))) FROM Superstore.dbo.Orders$ O) MAX_MONTH,
(SELECT MIN(DATEADD(DAY, 1, EOMONTH(O.[Order Date], -1))) FROM Superstore.dbo.Orders$ O) MIN_MONTH,
DATEADD(DAY, 1, EOMONTH(GETDATE(), -1)) AS TODAY_MONTH,
YEAR(O.[Order Date]) AS YEAR_NUM,
YEAR(GETDATE()) AS TODAY_YEAR,
GETDATE() AS TODAY_DATETIME,
CAST(GETDATE() AS DATE) AS TODAY_DATE,
DATEPART(qq,O.[Order Date]) AS QUARTER_NUM,
CAST(DATEADD(qq, DATEDIFF(qq, 0, O.[Order Date]), 0) AS DATE) AS QUARTER_START_DATE,
CAST(DATEADD(d, -1, DATEADD(q, DATEDIFF(q, 0, O.[Order Date]) + 1, 0)) AS DATE) AS QUARTER_END_DATE
FROM Superstore.dbo.Orders$ O
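
Most of these date calculations can be sanity-checked outside the database. Here is a small Python sketch of a few of them using only the standard library; the sample timestamp is invented, and the week-start logic assumes SQL Server's default DATEFIRST setting of 7 (weeks start on Sunday):

```python
from datetime import date, datetime, timedelta

# A sample timestamp standing in for O.[Order Date]
order_date = datetime(2020, 6, 18, 14, 30)
d = order_date.date()

# Epoch seconds, like DATEDIFF(s, '1970-01-01 00:00:00', ...)
epoch_seconds = int((order_date - datetime(1970, 1, 1)).total_seconds())

# Week start, Sunday-based to match SQL Server's default DATEFIRST of 7
# (Python's weekday() is Monday=0, so shift by one)
week_start = d - timedelta(days=(d.weekday() + 1) % 7)

# First and last day of month, like the EOMONTH idioms above
first_of_month = d.replace(day=1)
last_of_month = (first_of_month + timedelta(days=32)).replace(day=1) - timedelta(days=1)

print(week_start, first_of_month, last_of_month)  # 2020-06-14 2020-06-01 2020-06-30
```

The "add 32 days, snap to the 1st, subtract a day" trick is the same idea as DATEADD(DAY, 1, EOMONTH(..., -1)), just run in the other direction.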

Hope that you found this helpful.

HOW TO PREP TO SQL – PIVOT

A very long time ago, my first introduction to SQL was through the SQL view in MS Access. I would build a query and then look at the SQL view to see how it translated that into SQL. Using MS Access gave me a decent foundation in how to build queries and connect tables.

Fast forward about 20 years, and Tableau Prep is now my go-to tool for helping people learn how to manipulate and clean data. The built-in functionality in Prep makes it a breeze to get your data into the format you want, but there is no SQL view in Prep to look at and learn from. This is the first in a series of posts on how you would replicate steps from Prep in SQL.

For this example I've connected to the Superstore data set and built a simple flow that counts the number of orders by ship mode and segment. In Prep we do this by adding an aggregate step (the sigma step) that counts the orders by segment and ship mode.

The next step is to pivot (or reshape) the data. In this step I am moving the ship mode to the columns, so this is a rows-to-columns pivot.

Here's how to achieve the same result in SQL. First, create a query that pulls the fields from the Superstore data set:

SELECT (SELECT is saying pick these fields)
S.Segment, (separate your selected fields with a comma)
S.[Ship Mode], (use [] when your field name has a space)
S.[Order ID]
FROM SUPERSTORE.dbo.Orders$ S
(FROM is where you set the data source. The "S" after the table name is an alias. I think it is good practice to use aliases, especially when you start joining tables.)

This gets the initial data set that we want. Now we want to add on to this query to move the Ship Mode to the columns and count the order ids.

SELECT * FROM (this is new and we add this to select all of the fields from the statement)

(SELECT (this is the query from above)
S.Segment,
S.[Ship Mode],
S.[Order ID]
FROM SUPERSTORE.dbo.Orders$ S) O

PIVOT( (this starts the reshaping of the data)
COUNT(O.[Order ID]) (we want to get a count of the order id)
FOR O.[Ship Mode] IN (this specifies the values we want to include in our pivot. These are the values in the Ship Mode field; you can enter all or just a few of them)
([First Class],
[Second Class],
[Standard Class],
[Same Day])
) AS ORDERS_SHIPMODE

Here is the query without my notes:

SELECT * FROM

(SELECT
S.Segment,
S.[Ship Mode],
S.[Order ID]
FROM SUPERSTORE.dbo.Orders$ S) O

PIVOT(
COUNT(O.[Order ID])
FOR O.[Ship Mode] IN
([First Class],
[Second Class],
[Standard Class],
[Same Day])
) AS ORDERS_SHIPMODE

Please note this SQL is for MS SQL Server. Oracle has a PIVOT function as well, but the syntax is different. If you are copying the annotated query, be sure to remove my comments in parentheses before running it.
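
For databases without a PIVOT keyword, conditional aggregation (one COUNT over a CASE expression per column) produces the same result. Here is a sketch of that portable alternative using Python's built-in sqlite3 module on invented sample rows:

```python
import sqlite3

# Toy stand-in for the Superstore orders (illustrative rows only)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (segment TEXT, ship_mode TEXT, order_id TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", [
    ("Consumer", "First Class", "O-1"),
    ("Consumer", "Standard Class", "O-2"),
    ("Consumer", "Standard Class", "O-3"),
    ("Corporate", "Same Day", "O-4"),
])

# Each CASE WHEN plays the role of one entry in PIVOT's FOR ... IN list;
# COUNT ignores the NULLs the CASE produces for non-matching rows
result = conn.execute(
    """
    SELECT segment,
           COUNT(CASE WHEN ship_mode = 'First Class'    THEN order_id END) AS first_class,
           COUNT(CASE WHEN ship_mode = 'Second Class'   THEN order_id END) AS second_class,
           COUNT(CASE WHEN ship_mode = 'Standard Class' THEN order_id END) AS standard_class,
           COUNT(CASE WHEN ship_mode = 'Same Day'       THEN order_id END) AS same_day
    FROM orders
    GROUP BY segment
    ORDER BY segment
    """
).fetchall()
print(result)
# [('Consumer', 1, 0, 2, 0), ('Corporate', 0, 0, 0, 1)]
```

The same CASE WHEN pattern works on SQL Server, PostgreSQL, MySQL, and SQLite, which makes it handy when you need the query to be portable.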

Hope this helps you understand how to do a simple pivot in SQL.

January 2020 Sports Viz Sunday

The topic for the first #SportsVizSunday of 2020 is personal health data. I took some leeway with the topic and looked at my golf handicap index and scores. I normally walk the golf course and golf impacts my mental health (sometimes positive and sometimes negative). There were a few times this year where I thought about buying a boat.

For #SportsVizSunday, I wanted to look at where my index fell in relation to other women who keep a handicap and highlight the scores that count towards my current index. As with most public work I do, I like to keep it simple. I spend a lot of time during the week working on dashboards so in my free time I tend to keep it light and simple.

The 2019 season was a bit all over the place for me. I struggled with my irons for the last two seasons and that definitely impacted my scores. While that part of my game was off, the rest was in good shape, and that helped me get my handicap index down to 18.4.

I play most of my golf at two different courses and wanted to see what my score and differentials looked like at those two courses. I felt like I played better at Furnace Brook because I hit my fairway woods and hybrid more than I hit my irons. The data backed that up. I scored better (based on differential) at Furnace Brook than at William J Devine.

my differential at the Brook was 4 strokes lower than at the Park

In 2020 I’m going to track more of my golf stats and visualize them to see where I can get better. I know where I struggle with my game, but, seeing the data makes it a bit more real.

#PreppinData Week 1 2020

The first #PreppinData challenge for 2020 was to recreate Ryan Sleeper’s financial statement table using data from the fictitious Chin & Beard Suds Co.

To do this we need to:

  • get the profit totals
  • get the profit subtotals
  • update the blanks with the total & subtotals
  • indent the 2nd & 3rd level data

This is the initial view of the data. Based on the naming convention of the Item my first thought was to split off the number portion of the field and use that as a way to create a hierarchy.


I used the custom split tool to split the field on a space and took the first piece. I trimmed any extra spaces and renamed this field Item ID. These are the calculations:


I then created my hierarchy levels by taking the left X characters of the new Item ID field. My thought was I would use these to get the totals & subtotals. These are the fields I created for the hierarchy:


I also created a new field with the indented item names:

Format Item Name

IF LEN([Item ID]) = 2 THEN [Item]
ELSEIF LEN([Item ID]) = 3 THEN SPACE(5) + [Item]
ELSEIF LEN([Item ID]) = 5 THEN SPACE(10) + [Item]
END
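
For readers following along in another language, the indenting logic above is a straightforward conditional. Here is a Python sketch (note that Tableau's IF with no ELSE returns null, whereas this version passes unexpected lengths through unchanged):

```python
def format_item_name(item_id: str, item: str) -> str:
    """Indent 2nd- and 3rd-level rows, mirroring the Tableau IF/ELSEIF above."""
    if len(item_id) == 2:
        return item
    elif len(item_id) == 3:
        return " " * 5 + item
    elif len(item_id) == 5:
        return " " * 10 + item
    # Tableau's IF with no ELSE would return null here; we pass through instead
    return item


print(repr(format_item_name("10", "Profit")))    # 'Profit'
print(repr(format_item_name("101", "Revenue")))  # '     Revenue'
```

The item ids here ("10", "101") are hypothetical examples, not values from the challenge data.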

After I had my levels, I created two aggregates to get the totals and subtotals. The first one sums the profit by my new top-level field and the second one sums the profit by the new second-level field. I joined both of these aggregates back to the prior step, matching the top level in the totals aggregate to the Item ID and the second level to the second level.


The last step is the clean-up step. In this step I made 11 changes to the joined data:

  • removed the duplicated fields from the joins
  • merged the profit field from the initial step with the profits from the aggregates
  • renamed the merged profit fields to Profit
  • created a calculated field with the length of the Item ID field to sort my rows correctly
  • renamed the Format Item Name field to Item Name
  • removed any remaining unnecessary fields


This was a great challenge to kick off the 2020 #PreppinData series. I love the formatting idea from Ryan and have a few ideas of how I can implement both Ryan’s table and this PreppinData challenge in my day to day work.

If anyone is interested in getting a copy of my flow please let me know. I am more than happy to share my approach.

Happy Preppin!

Different Ways of Getting the Same Result

There are usually a number of different ways to get to the same result in Tableau & Tableau Prep. Week 9 of #PreppinData is another example of this.

For this edition of #PreppinData we looked at Chin & Beard Suds Co’s Twitter complaints. We were given a list of complaints and asked to:

  • Remove the Chin & Beard Suds Co Twitter handle
  • Split the tweets up into individual words
  • Pivot the words so we get just one column of each word used in the tweets
  • Remove the 250 most common words in the English language (sourced for you from http://www.anglik.net/english250.htm)
  • Output a list of words used alongside the original tweet so we know the context for how the word was used

Here's the flow I created:


The first clean step splits the text on the space character. The next step pivots all of those splits back together into one column. In this pivot I used a wildcard pivot on field names containing "Tweet".


After consolidating the splits, I did a few clean steps to get the two sources ready for joining. Any time I join on text, I make sure to trim the extra spaces and make the text fields either all upper or all lower case; I think this is a good habit for text matching. I also excluded the company Twitter handle and any null records in this step.


Now it is time to join them together. The first join I did was an outer join between the tweets and the list of 250 words. In the step after the join, I kept only the records where the 250 list was null (I used the rank field).

The other way this join can be done is with the left unmatched only join type. When you use this join type all you need to do is remove the two fields from the 250 list.

Initially I didn't think of the other join type and found a different way to get to the final result. Looking back at the joins, the second option is probably the better way to go. There isn't a right or wrong way to do it, just different approaches.
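
For anyone curious how the whole flow looks outside of Prep, here is a rough Python sketch of the same split / normalize / anti-join logic; the tweets and the stop-word list are placeholders, not the actual challenge data:

```python
# Placeholder tweets and stop words -- invented stand-ins for the challenge data
tweets = [
    "@CBSudsCo your soap smells like regret",
    "@CBSudsCo the lavender bar is lovely",
]
stop_words = {"the", "your", "is", "like"}
handle = "@cbsudsco"

rows = []
for tweet in tweets:
    for word in tweet.split():       # split on spaces, like the Prep split + pivot
        w = word.strip().lower()     # trim and lowercase before any text matching
        if w and w != handle and w not in stop_words:
            rows.append((w, tweet))  # keep the original tweet for context

for word, tweet in rows:
    print(word, "|", tweet)
```

The `if w not in stop_words` test is the anti-join: it plays the same role as keeping only the unmatched records from the 250-word list.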

Happy Preppin!

#PreppinData

I just finished week 2 of the Preppin Data challenge and wanted to walk through my approach. One of the things I love about Tableau and Tableau Prep is that there are a number of different ways to get to the same result.

This week Carl & Jonathan gave us a file that had a big header, names that needed to be cleaned, and metrics that needed to be moved to columns. The output needed to be 6 columns and 14 rows.

After setting my connection to the file the first thing I did was check the Use Data Interpreter box. This helper removed the unnecessary header at the top of the file.

check the data interpreter box to remove the unnecessary header

Whenever I build something in Tableau Prep, I like to add a clean step right after my connection to get a sense of what is in the data. When I did this, I noticed that my City field had a value called "city". I knew from looking at the initial file that this was a secondary header, so I right-clicked on the value and selected Exclude.

remove the secondary header row

At this point I also added an aggregate to see how many rows were in my data set. I like to add these as I build out a flow to get a sense of how my record counts change as I build out different steps.

I added another clean step because I like to partition out my changes when I build something new (I'm quirky); I could have done all of this in the first step. In this step I grouped the various city names by pronunciation, which took care of all but two values. I edited the group and manually added "nodonL" to London and "3d!nburgh" to Edinburgh. In this step I also created the new header field, which combines the metric and the measure, and then removed those fields as they were no longer needed.

used pronunciation group by with a manual add, created new field and removed unnecessary fields

The next step was to move the values from the rows to columns, which is done in a pivot step. Most of the people I help with Prep assume Pivot = pivot table and are confused when they add that step; Pivot reshapes your data. My pivoted field is the new field I created in the prior step, and my field to aggregate is the Value field.
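
If it helps to see the reshaping outside of Prep, a rows-to-columns pivot is easy to picture in plain Python: each distinct value of the pivoted field becomes a key, and the aggregated value becomes the entry. The city and measure names below are invented for illustration:

```python
# Long-format rows: (city, measure, value) -- invented sample data
rows = [
    ("London", "max_temp", 21),
    ("London", "min_temp", 9),
    ("Edinburgh", "max_temp", 17),
    ("Edinburgh", "min_temp", 6),
]

# Rows-to-columns pivot: one dict per city, one key per former row value
wide = {}
for city, measure, value in rows:
    wide.setdefault(city, {})[measure] = value

print(wide["London"])  # {'max_temp': 21, 'min_temp': 9}
```

Each (city, measure) pair in the long data becomes a cell in the wide result, which is exactly what the Prep pivot step does with the new header field and the Value field.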

this moves the data from rows to columns

At this point I also added an aggregate step to make sure I had 14 rows as the instructions called for. This is the full view of my flow.


Thanks for reading and happy preppin!