morality and grocery shopping

I haven’t set foot in a grocery store in almost 7 weeks. Since Massachusetts went into a stay-at-home advisory I’ve avoided going into stores and have relied on Instacart, Amazon Fresh, USPS, UPS, ShipIt, and a local meat farm for my food and necessities. Every time I use these services, I feel guilty and struggle with the decision. Is it morally right for me to pay someone to do something I am uncomfortable doing?

My moral compass has always been guided by fairness, and it has been a strong moral compass. I’ve made people uncomfortable when I refused to laugh at a joke that degraded someone for being different, and I’ve made career decisions based on how teammates were treated. And now I’m not sure if I am off course. Is it morally right for me to expect someone to do something I am uncomfortable doing?

I assume that the people who have been shopping for me are in a different situation than I am. We are DINKs, and my husband and I are both still employed. I recognize that we are lucky. While I don’t know for sure, I would guess that the people who have been doing my shopping are putting themselves at risk because they need to. When I’ve discussed this with friends, they’ve tried to reassure me by saying I’m supporting someone who needs the work. All I can think is that I am putting someone in potential danger. Am I harming someone because I don’t want to do something?

I see multiple sides of this and have come to the realization that there is no real answer. We’re all trying to do the best we can to make the right decisions.

Virtual Prep TUG Example

I was honored to present at the first virtual Tableau Prep user group session and was both thrilled and a bit nervous when I found out 1,200 people had registered for the session. Thrilled because I was glad to see that there were so many people interested in Prep, and nervous because I hadn’t presented to that many people before. It was a great TUG; I learned something from my co-presenters Joshua, Jenny, and Rahim, and I am appreciative of Jack & Caroline’s efforts in making the TUG happen. The link to the recording and the packaged flow are posted on the Tableau forum.

My presentation was based on a real-life example where I had to reverse the management levels in an employee hierarchy. The people hierarchy file has an employee’s immediate leader as their first management level, their immediate leader’s leader as their second, and so on up the hierarchy to the company leader. I needed to reverse that so the company leader is the first management level and the hierarchy works downward, with the immediate leader in the last level.

As you can see in the Excel file, employees have different numbers of leaders, so the top-level leader could be in any one of the columns. When I was working through this I noticed that all of the management level headers have a number in them, and my initial thought was that I could use that number to reverse the levels.

After connecting to the file in Prep I added an initial clean step and created one calculated field called Dummy Field for Aggregate Joins with "A" as the calculation. (I’ll get back to why I created this towards the end of the post.)

I wanted to extract the number from the management level headers and use it to create a new, reversed level. In order to extract the number I needed to reshape my data with a pivot step. Moving the columns to rows puts all of my managers in one column and creates a new field called Pivot 1 Names, which holds the old column headers.
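
To make the reshaping concrete, here is a rough pandas equivalent of the pivot step. The employee names and the "Management Level" column names are made up for illustration and will not match the real file.

```python
import pandas as pd

# Made-up stand-in for the hierarchy file: one column per management level.
df = pd.DataFrame({
    "Employee": ["Albert Norman", "Jane Doe"],
    "Management Level 1": ["Brittany Newman", "Carl Smith"],
    "Management Level 2": ["Dana Ortiz", "Patti Reed"],
    "Management Level 3": ["Patti Reed", None],
})

# Columns-to-rows pivot: one row per employee/level, with the old column
# header captured in "Pivot 1 Names" and the manager name in "Manager".
long = df.melt(id_vars="Employee",
               var_name="Pivot 1 Names",
               value_name="Manager")
```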

After the pivot I added a clean step and made 7 changes:

  1. excluded any records with a null management level
  2. duplicated the pivot names field
  3. used the built-in clean function to remove all the letters from the field created in step 2
  4. removed all of the spaces from the new field
  5. changed the field type from a string to a number
  6. renamed the new field management level
  7. removed the Pivot 1 Names field

These steps created a new field with the number from the management level header. I duplicated the original pivot names field because I am in the habit of keeping the original field as a comparison point when I change values. You do not have to do this; it is just a personal preference.
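
Continuing the pandas sketch from above (not the actual Prep flow), those clean steps boil down to dropping the null rows, pulling the digits out of the old header, and converting them to a number:

```python
# Drop rows with no manager, extract the digits from the old column header,
# and turn them into a numeric management level.
long = long.dropna(subset=["Manager"]).copy()
long["Mgmt Level"] = (long["Pivot 1 Names"]
                      .str.extract(r"(\d+)")[0]
                      .astype(int))
long = long.drop(columns=["Pivot 1 Names"])
```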

The built-in clean tools to remove letters and spaces are accessed by selecting the column you want to clean and then clicking on the three dots. That opens a menu where you’ll see the Clean option.

The next step was to get the max management level number for each employee. Once I have the max level I can subtract the management level we pulled out of the header from it to get the new management level. To get the max level I added an aggregate step, grouped by the employee, added the management level number to the aggregated fields, and changed the calculation to max. I then joined that max level back to my data to add the field in. Note that in the latest version of Prep you can now do this with a LOD (level of detail) calculation; that functionality didn’t exist when I created the flow.
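
In the same pandas sketch, the aggregate-and-join pattern is a group-by max followed by a merge back onto the detail rows:

```python
# Max management level per employee, joined back onto every row for that employee.
max_level = (long.groupby("Employee", as_index=False)["Mgmt Level"]
                 .max()
                 .rename(columns={"Mgmt Level": "Max Mgmt Level"}))
long = long.merge(max_level, on="Employee")
```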

Now that I have the highest management level for each employee I can subtract the management level from the max and add 1 to get the reversed level. I created this calculation: ([Max Mgmt Level] - [Manager Level]) + 1. I also created a new header field for the reversed level with this calculation: "Level " + STR([Reverse Mgmt Level]) + " Manager"
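
Both calculations translate directly; continuing the pandas sketch (column names are the sketch's, not the flow's):

```python
# Reverse the level: (max level - level) + 1, then build the new header text.
long["Reverse Mgmt Level"] = long["Max Mgmt Level"] - long["Mgmt Level"] + 1
long["Reverse Header"] = ("Level "
                          + long["Reverse Mgmt Level"].astype(str)
                          + " Manager")
```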

In this snippet of data you can see that Albert Norman has Brittany Newman as his 1st management level, and his highest management level is 5. When that is reversed, Patti Reed, who is at the top level, is now the level 1 manager and Brittany is the level 5 manager.

I cleaned up a few fields and then added another pivot to move the new management levels back to columns. This is a rows-to-columns pivot, and because I know there is only one value for each level I am taking the MIN of the manager name.
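
In the pandas sketch, a rows-to-columns pivot with a MIN aggregation is a group-by min followed by an unstack:

```python
# One column per reversed level; MIN is safe because each employee has at
# most one manager per level.
wide = (long.groupby(["Employee", "Reverse Header"])["Manager"]
            .min()
            .unstack()
            .reset_index())
```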

The last thing to do is to add Patti Reed back to the cleaned data. Patti is the CEO of Fake Company and does not have a management level, so when we excluded the null management levels after the first pivot she was removed from the data set. I created a branch for just Patti and unioned it back to the cleaned data set.
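
The union is a simple concatenation in the sketch; Patti's row is hand-built here just to show the shape, not how the branch is built in the flow.

```python
# Patti has no management levels, so she fell out when nulls were excluded;
# add her back as a one-row branch.
patti = pd.DataFrame({"Employee": ["Patti Reed"]})
result = pd.concat([wide, patti], ignore_index=True)
```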

Earlier in this post I mentioned that I created a dummy field with "A" in it. I like to add aggregates into my flows to check the record counts at different stages of the flow. I got into the habit of creating these and exporting them because I often work with sampled data. The dummy field gives me a way to join the aggregates together and validate my record counts. If you’ve downloaded the flow you’ll see these aggregates and the step that exports the counts.
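
The record-count check can be mimicked the same way in the sketch: count rows at different stages and join the counts on a constant key so they sit side by side.

```python
# The constant "Dummy" key plays the role of the Dummy Field for Aggregate Joins.
counts_start = pd.DataFrame({"Dummy": ["A"], "Rows In": [len(df)]})
counts_end = pd.DataFrame({"Dummy": ["A"], "Rows Out": [len(result)]})
print(counts_start.merge(counts_end, on="Dummy"))
```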

Thanks for reading and I hope this example was helpful. If you have any questions please feel free to get in touch. Happy Preppin!

Preppin Data Week 14

The #PreppinData week 14 challenge asked us to determine the impact of implementing a meal deal at Whyte’s Cafe. I thought this was a good challenge, and per usual I broke the work out into a number of steps. Seeing the solutions for the weekly challenges has made me realize that I usually take a different approach than the posted solution and that I like to break things out into multiple steps; this week was no different.

Here is what my final flow looks like:

PreppinWeek14

I connected to the .csv file and inserted a clean step. The steps say to replace null prices with 1.5; however, I didn’t have any nulls when I imported my data, so I skipped that step. They also say to replace the null member IDs with a zero, and this is the only action in my first step. To do this I right-clicked on the null in the member ID field, selected Edit Value, and replaced the null with 0.
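
If you wanted to mirror this step outside of Prep, a minimal pandas version might look like the following; the data frame and column names here are stand-ins, not the actual challenge schema.

```python
import pandas as pd

# Tiny stand-in for the Whyte's Cafe data.
tickets = pd.DataFrame({
    "Ticket ID": [1, 1, 1, 2],
    "Member ID": [101, 101, 101, None],
    "Type": ["Drink", "Main", "Snack", "Drink"],
    "Price": [1.5, 5.0, 2.0, 1.5],
})

# Equivalent of the first clean step: replace null member IDs with 0.
tickets["Member ID"] = tickets["Member ID"].fillna(0)
```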

The output needed to have the total ticket price, the number of meal deals, the total price for items not included in the meal deal, and the difference between the total price and the meal-deal-adjusted price. I determined these through 3 different branches.

The first aggregate I created was to determine the number of meal deals per ticket. To get this I needed to know the number of items per ticket.

Week14ItemAgg

After the aggregate I added a pivot step to move the type from rows to columns. I did this so I could use the columns to determine the number of meal deals per ticket.

Week14ItemPivot
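
Continuing the pandas stand-in, the item aggregate and the pivot to columns collapse into a group-by count plus an unstack (the _Items column names are my own):

```python
# Items per ticket by type, then types moved to columns.
items = (tickets.groupby(["Ticket ID", "Type"])
                .size()
                .unstack(fill_value=0)
                .rename(columns=lambda t: f"{t}_Items")
                .reset_index())
```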

I then inserted a clean step to create the meal deal flag and the number of meal deals per ticket. This is my approach (a rough pandas sketch of the same logic follows the list):

  • determine if the ticket has items that are meal deal eligible
    • [Drink] > 0 AND  [Snack] > 0 AND  [Main] > 0
  • right-clicked on the new meal deal eligible field and kept True
  • determine number of meal deals per ticket:
    • IF [Drink] <= [Snack] AND [Drink] <= [Main] THEN [Drink] ELSEIF [Snack] <= [Drink] AND [Snack] <= [Main] THEN [Snack] ELSEIF [Main] <= [Drink] AND [Main] <= [Snack] THEN [Main] END
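
Here is that logic in the pandas stand-in: keep only eligible tickets, then take the smallest of the three item counts as the number of meal deals (which is what the nested IF/ELSEIF works out to).

```python
# A ticket is meal-deal eligible if it has at least one drink, snack, and main.
items["Meal Deal Eligible"] = ((items["Drink_Items"] > 0)
                               & (items["Snack_Items"] > 0)
                               & (items["Main_Items"] > 0))
items = items[items["Meal Deal Eligible"]].copy()

# The number of meal deals is the minimum of the three item counts.
items["Min Meal Deal Item"] = items[["Drink_Items",
                                     "Snack_Items",
                                     "Main_Items"]].min(axis=1)
```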

The next step was to get the average cost per item type for each ticket. I created this to determine the excess cost later in the flow. Because drinks, mains, and snacks all have different prices, I used the average by type to get the excess cost.

Week14ItemAvgCost

I also pivoted the types to columns and then added a clean step. In the clean step I replaced any null values with a zero.

The third aggregate totaled the cost per ticket.

Week14TicketTotal

I added a clean step after the aggregate to round the total cost to 2 decimals using this calculation: ROUND([Total Ticket Price], 2).

After the aggregates were ready to go I joined the 3 branches together and started the final steps. These are the calculations I did in the costs step (sketched in pandas after the list):

  • determine meal deal cost:
    • Meal Deal Total: [Min Meal Deal Item] * 5
  • determine the number of items that aren’t part of the meal deal (3 different calculations):
    • [Drink_Items] - [Min Meal Deal Item]
    • [Snack_Items] - [Min Meal Deal Item]
    • [Main_Items] - [Min Meal Deal Item]
  • determine the cost of excess items:
    • [Excess Drink Items] * [Drink_Price]
    • [Excess Snack Items] * [Snack_Price]
    • [Excess Main Items] * [Main_Price]
  • determine the excess total:
    • ROUND([Excess Drink Cost] + [Excess Main Cost] + [Excess Snack Cost], 2)
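
Pulling the branches together in the pandas stand-in (the average-price and ticket-total branches are built here too, and the column names are my own), the cost-step calculations look roughly like this; the last two lines also cover the final-step calculations that follow.

```python
# Second branch: average price per ticket and type, pivoted to columns.
prices = (tickets.groupby(["Ticket ID", "Type"])["Price"]
                 .mean()
                 .unstack(fill_value=0)
                 .rename(columns=lambda t: f"{t}_Price")
                 .reset_index())

# Third branch: total price per ticket, rounded to 2 decimals.
totals = (tickets.groupby("Ticket ID", as_index=False)["Price"].sum()
                 .rename(columns={"Price": "Total Ticket Price"})
                 .round({"Total Ticket Price": 2}))

# Join the branches, then work out the meal deal and excess costs.
deals = items.merge(prices, on="Ticket ID").merge(totals, on="Ticket ID")
deals["Meal Deal Total"] = deals["Min Meal Deal Item"] * 5
for t in ["Drink", "Snack", "Main"]:
    deals[f"Excess {t} Items"] = deals[f"{t}_Items"] - deals["Min Meal Deal Item"]
    deals[f"Excess {t} Cost"] = deals[f"Excess {t} Items"] * deals[f"{t}_Price"]
deals["Excess Total"] = (deals[["Excess Drink Cost",
                                "Excess Snack Cost",
                                "Excess Main Cost"]].sum(axis=1).round(2))

# Final-step calculations: meal-deal-adjusted total and the difference.
deals["Total Ticket Meal Deal Adj"] = deals["Meal Deal Total"] + deals["Excess Total"]
deals["Cost Difference"] = deals["Total Ticket Price"] - deals["Total Ticket Meal Deal Adj"]
```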

Because I like to break things up I inserted another clean step. These are the calculations I used to get the final output.

  • determine the new total price using the meal deal:
    • [Meal Deal Total] + [Excess Total]
  • determine the cost difference:
    • [Total Ticket Price] - [Total Ticket Meal Deal Adj]

My meal deal items and costs matched the solution output, but my prices did not, and I haven’t been able to figure out why. I did pivots in Excel to get the cost by ticket and they match what I have in my output. I decided not to get too hung up on that and to treat this week as a concept match rather than an exact match.

I enjoyed this week’s challenge and I continue to enjoy seeing the different approaches to the challenge each week.


Fairway Ladies Year in Review

My goal for 2019 is to produce more personal projects on my Tableau Public page than I have in the past. I’m kicking that off by looking at the 2018 golf season for my golf group. The Fairway Ladies of Franklin Park play at the William J. Devine golf course in Boston, MA. To get a high-level summary of what our season looked like I pulled a report from GHIN (the program we use to track our handicaps).

I designed the dashboard with a wide layout and a simple color scheme, and I’m pleased with how it turned out. I went back and forth on the score differential chart a number of times and finally settled on the bar code chart.

I’m looking forward to doing more of this in 2019!

Fairway Ladies 2018 Season

2018 Tableau Acknowledgements

To keep with the popular year-end theme of the year in review, here is my list of acknowledgements to the Tableau community for 2018. These are in no specific order.

Susan Glass (@SusanJG1) & Paula Munoz (@paulisDataViz) – I “met” Susan & Paula via Twitter and then got to meet them in person at the Boston Tableau User Group this year. I’ve been a sporadic BTUG attendee for years but never really met anyone at these user groups. Sometimes the user group meetings felt like riding the T: unless you already knew someone on the train, you avoided eye contact with everyone else. It is great to have some real live Tableau friends now.

Tom O’Hara (@taawwmm) – Tom is Tableau support at Comcast. The range of questions he answers on our internal Slack & Teams Tableau boards is amazing. He’s always helpful and supportive. I hosted Sports Viz Sunday in September and was thrilled that Tom supported me by entering a viz. It’s great to have work colleagues support your personal Tableau endeavors.

Josh Tapley (@josh_tapley) & Corey Jones (@CoreyJ34) – Josh & Corey run the Philadelphia Tableau User Group and gave me my first opportunity to present at a TUG. I did a live demo of Tableau Prep and enjoyed presenting more than I thought I would. I was also blown away by Corey at the TUG. A number of St. Joe’s students presented their work, and after they were done Corey acknowledged something he liked about each one of their vizzes.

Ann Jackson (@AnnUJackson) & Luke Stanke (@lukestanke) – Ann & Luke put out my favorite podcast, Hashtag Analytics (https://bit.ly/2QTiAJW). Their episodes are great conversations on data and the Tableau community. You’re missing out if you aren’t listening to these.

The SportsVizSunday Guys (@SimonBeaumont04, @JSBaucke, @sportschord) – Thank you for asking me to host #SportsVizSunday in September! Being asked to host September’s challenge was big for me; it was the first time I’d been invited to be more involved in a data viz project. There is a big Tableau community on Twitter, and at times I’ve felt a little lost because I don’t create flashy work and I don’t have a gazillion followers. When Simon asked me to host I felt great. We all like to be recognized from time to time (even the introverts like this).

Sarah Bartlett (@sarahlovesdata) – Sarah is the Tableau ambassador on Twitter. No one else in the community welcomes and supports people like Sarah does. She also promotes new folks every week with #TableauFF. She’s got mad skills too, and it was awesome to see a woman in the IronViz Europe finals this year.

Chantilly Jaggernauth (@chanjagg) & Amar Donthala (@AmarendranathD) – Chantilly & Amar created the Millennials & Data (millennialsanddata.com) program this year to prepare millennials to enter the data-driven world. This is in addition to their full-time jobs at Comcast. Their first cohort of 16 produced amazing work, and they all passed their Tableau Desktop Specialist certifications. I see great things in the future for Chantilly & Amar!

There are a number of other folks who have influenced me in 2018. This list is by no means inclusive of everyone, but these are the folks I wanted to highlight.

R&D Makeover Monday

In last week’s Makeover Monday recap Andy reminded us that this is a makeover. The intention is to evaluate what is good and what can be improved in a viz and to create a new one with those points in mind. People can use Makeover Monday however they want, but the intention is to improve upon the selected viz.

I normally try to take that approach, but I don’t often document what I like and what can be improved, so for the next few weeks I am going to attempt to put my thoughts and approach together here.

The viz this week comes from HowMuch.net and looks at R&D spending across the globe.

R&D-AROUND-THE-WORLD

I like that the person who created this tried a different approach to displaying the information. They want the reader to focus on the large circles in the middle for the US, China, Japan, and Germany. What I think can be improved is the amount of clutter in the viz: there is a lot going on between the circles, the map, the flag, and the multiple colors. I think a better approach would be to simplify the viz and draw attention to the top 5 countries. I don’t think the flag and the country shape add to the story, so I would remove them.

I selected a treemap for this week’s makeover. While treemaps may not always be the best option for comparing values, I think it works in this case because I want to highlight the contribution of the top 5 countries rather than compare all of the countries against each other.

RD Spend

Overall I think this meets the goal of drawing attention to the top countries.

Swing Your Swing

Last week the Dick’s Sporting Goods ad with Arnold Palmer popped into my head. If you haven’t seen it before, it’s worth the 50-odd seconds.

In the ad Palmer says:

“Swing your Swing.
Not some idea of a swing.
Not a swing you saw on TV.
Not the swing you wish you had.
No, swing your swing.
Capable of greatness.
Prized only by you.
Perfect in its imperfection.
Swing your swing.
I know I did.”

I see “swing your swing” as being true to yourself in your approach. In golf all that truly matters is the contact with the ball. How you get to that point is where you “swing your swing.” There are methods and instructions that make it easier to square the club at impact; however, do what works for you. Explore your swing. Figure out your tendencies, good and bad. Swing your swing. Don’t become so mechanical that you lose sight of you.

Because I have a golf “issue” I immediately applied this to my data viz work. My goal in data visualization is to depict data in a manner that clearly communicates the insights in the data. Like golf, there are a number of best practices and methods for how to do this. And like golf, there are different approaches to get to the same end point. Take a look at #makeovermonday or #dataforacause and you will see a number of people approach the same data set in a number of different ways. You’ll see things that run the range from simple bar charts to radial charts. This is where people “viz their viz.”

In both golf and data viz your personal style is important; however, if your style trumps your success or your ability to communicate effectively, you need to refine your style. It is hard to square the club consistently when you come over the top, and it is hard to communicate data effectively when you have a pie chart that uses 20 different colors.

So how do you get there?

Practice, practice, practice.
Experiment – #makeovermonday is a great opportunity for this.
Learn from others – be inspired by their work but don’t seek to duplicate someone else’s style.
Have fun!


makeover Monday week 7

This week’s exercise looked at Valentine’s Day spending in the US. I liked the original viz – the color scheme seemed appropriate for the topic. I liked the images and felt their size conveyed what was intended.

After setting up the data set I started creating the calculated fields I needed:

  • The first was to create a date field from the Year field in the dataset – DATE("02" + "/" + "14" + "/" + STR([Year])).
  • I then created a couple of measure fields for the % Buying and the Avg Net Spend – IF [Metric] = 'Percent Buying' THEN [Measure] END and IF [Metric] = 'Net Average Spend' THEN [Measure] END

I wanted a custom shape for whatever I ended up creating, so I found a free clip art heart online, brought it into PowerPoint, made a couple of updates to it, and saved it to my custom shapes folder.

After creating a few different views I decided to keep it simple and focus on what people were buying for Valentine’s Day from 2010 to 2016. I tried line charts and bar charts with the hearts and then thought this might be a good time to give a bump chart a try.

Matt Chambers has a great post on his site that walks you through how to create a bump chart, and I used that as a refresher. In order to get the bump chart to work I had to create a couple more calculated fields:

  • Rank for the % Buying – RANK_UNIQUE(SUM([% Buying]))
  • Prior Year Ranking – LOOKUP([Rank % Buying],-1)
  • Difference From Prior Year – IF [Rank % Buying] > [Prior Year Ranking] THEN 'down from the prior year'
    ELSEIF [Rank % Buying] = [Prior Year Ranking] THEN 'the same as the prior year'
    ELSEIF [Rank % Buying] < [Prior Year Ranking] THEN 'up from the prior year'
    END

I wanted the prior year and difference from prior year for the tooltip.

After getting the bump chart working I tested out a couple of different color schemes and found the purple to be a bit easier on the eyes than the red I had intended to use.

There is a lot more I could have done with the dataset this week, but overall I’m pretty happy with what I created. The color scheme is different for me, and I was happy with the custom shape and the bump chart. For the next few months I’m going to experiment more with the design side.

momvalentines