Modelling options: Getting the best return on investment

A few weeks ago I shared something with my friends and family on Facebook that got quite a reaction. It was an article entitled McDouble is ‘cheapest and most nutritious food in human history’ [1].

The originator of the claim, Stephen Dubner, co-author of the best-selling book Freakonomics, went on to say:

The double cheeseburger provides 390 calories, 23 grams of protein – half a daily serving – seven per cent of daily fibre, 19 grams of fat and 20 per cent of daily calcium, all for between $1 and $2, or 65p and £1.30.

Kyle Smith, a New York Post columnist, agreed:

“For the average poor person, it isn’t a great option to take a trip to the farmers market to puzzle over esoteric lefty-foodie codes”, Mr Smith wrote.

“Facts are facts – where else but McDonald’s can poor people obtain so many calories per dollar?”

It was a ridiculous claim, controversial at best, and yet one that the data showed to be true. It got me thinking about Return on Investment (ROI) and how we model it to support better decision making.

ROI measures the amount of return on an investment relative to the investment’s cost. [2]

In the example above the McDouble returns the highest number of calories per dollar. To phrase it the way the ‘business’ might describe a potential solution, it “delivers the biggest bang for the buck” (http://idioms.thefreedictionary.com/bang+for+the+buck).
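
To make that concrete, here is a minimal sketch that ranks options by calories per dollar – the ‘return’ in this example. The McDouble figures come from the quote above; the alternatives are invented purely for illustration:

```python
# Rank food options by calories per dollar -- the "return" in this example.
# McDouble figures are from the quote above; the other entries are invented.
options = {
    "McDouble": {"calories": 390, "cost": 1.50},
    "Supermarket sandwich": {"calories": 450, "cost": 3.00},   # hypothetical
    "Farmers' market salad": {"calories": 150, "cost": 4.00},  # hypothetical
}

for name, o in sorted(options.items(),
                      key=lambda kv: kv[1]["calories"] / kv[1]["cost"],
                      reverse=True):
    print(f"{name}: {o['calories'] / o['cost']:.0f} calories per dollar")
```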

As a Business Analyst I strongly believe that it’s my job to table options to my stakeholders and to back those options up with data, modelled in a way that helps them make informed decisions. I know many will take issue with that, stating that it’s not the role of the BA to provide solutions, but that’s not actually what I’m suggesting here (although for the record I disagree – that’s a different blog post). What I am suggesting is that however ridiculous or unpalatable the suggested solutions are, it’s the role of the BA to model benefits and potential outputs to aid stakeholders in making a decision.

Lean tells us we should validate a hypothesis with the minimum amount of effort, which sounds like “the maximum amount of calories for the dollar” to me.

In Agile delivery we would say that an MVP can be defined as the least amount of work we can do to validate or invalidate the hypothesis.

Modelling options is the best way to ensure that you’re helping stakeholders make informed decisions (and gives them the ability to defend themselves when they get caught eating a burger!)

[1] Johnson, Daniel. 2013. “McDouble is ‘cheapest and most nutritious food in human history’.” [online]. Published on 30th July 2013. The Telegraph.
[2] Investopedia. 2014. “Return On Investment – ROI.” [online]. investopedia.com

Applying ‘Persuasion, Emotion and Trust’ PET Design Theory in the Real World

This month I was fortunate enough to have been asked to speak at a conference, alongside my colleague Chris Bush, on the topic of PET™ Design Theory, “Persuasion, Emotion and Trust”. The conference was hosted in Gothenburg, Sweden, by Maverick by Sigma, a Swedish digital agency.

So what is PET design?

Well, Human Factors International (HFI), the authors of PET design, say:

PET design uses research-based techniques to leverage the science of Persuasion, Emotion, and Trust to make websites more engaging, compelling, and effective. This innovative methodology complements classic usability, empowering companies to achieve their business goals.

Presentations and Resources

We presented two sessions on the topic: an introduction to PET, and then a session looking at some examples of PET design applied.

Introducing (PET ™) Design Theory, “Persuasion, Emotion and Trust”

Applying Persuasion Design and PET ™ Theory in the Real World

Camp Digital 2013 Presentation

At our recent Camp Digital 2013 event, my colleague Chris Bush presented on the same subject, his slides are available here:

Card Sorting: Designing user informed categories

Card sorting is a technique for informing Information Architecture. It allows the architect to collect data on users’ categorisation preferences and gives them an opportunity to observe users discussing the reasons behind their choices.

Over at Boxes and Arrows Donna Spencer and Todd Warfel describe it as:

A great, reliable, inexpensive method for finding patterns in how users would expect to find content or functionality. [1]

An example

Imagine you’re standing in the supermarket and you’ve been given a list of items to purchase. The list is in no particular order and is completely uncategorised. You read down the list:

Shopping List:

  • Chicken
  • Oranges
  • Sweetcorn
  • Cotton wool balls

In your head you’re already starting to think “Is the chicken in the meat aisle or the freezer section?”, “Do I find the cotton wool balls with the baby stuff or with the cosmetics?”

This is what a user is doing when they come to your site for the first time. They’re trying to find something, so they’re looking at the sections in your navigation and trying to work out where to go first: which section titles best describe what they’re looking for?

So how does card sorting help us with this? Well, we do something similar to standing our users in a supermarket with a shopping list, except this time we ask them to group the items and name the groups in a way that makes sense to them.

Note: The following method is for an observed/moderated experiment. There are a number of great tools available on the web to help you set up remote/unmoderated experiments; however, I find that while these are great for producing a lot of data quickly, you lose the context provided by observing the users as they discuss cards and move them between categories.

Before you start

Picking your cards

I work on the basis that 75 cards will take your users about an hour to sort, so consider how many cards your users will be able to sort effectively in the time available. Also consider which pages to sort. You should strip out any category pages, as these will be replaced by your users’ newly named groupings. Typically you should be looking at 2nd/3rd-level pages.

Preparing your cards

There are 3 things that I’d recommend each of your cards having:

1. A unique ID
This is purely for administrative purposes. It allows you to identify the cards during the analysis phase. Make sure it’s random and doesn’t relate to the section that the page currently lives in (if you’re experimenting with an existing site). Make it small; you don’t want to lead the users into groupings based on the ID.
2. A page title
If you’re testing an existing site you may be able to pull these straight from the page titles in the metadata. However, if your page titles have been optimised for SEO (or spammed with keywords) you may want to clean these up so that you don’t mislead the user.
Watch out for any pages that may have duplicate names like “FAQs”. Some sites have one of these pages per product or per section, consider renaming it to “Product Name – FAQs”.
3. A short description
Again, this may be able to be pulled from your pages’ metadata. The same consideration applies to SEO-optimised descriptions. If the page title is a little ambiguous, use the description to supply some context to the page.

Here is an example based on our supermarket scenario:

#1
Free range corn-fed British chicken
This delicious and richly flavoured bird has a characteristically golden colour to the flesh, which comes from its maize-based diet. The hens are reared in the same way as our other free range birds.

If you’re any good with a spreadsheet I’d recommend setting up a mail merge to print these. Depending on who you’re asking to take part, their location and the time between the sorts talking place and being able to start the analysis, you may need several sets of cards printing.

Recruiting users

Consider who you’re going to recruit to represent your users. Make sure that your sample is as representative as possible of gender, age, nationality, skill and role/user type.

Card sorts can be done with an individual user, but you tend not to get a lot of discussion with users sharing their thought process or opinions. For this reason I’d suggest groups of 2 or 3 users, so that there is plenty of discussion; you’ll be able to hear where there is consensus and where there is disagreement.

When recruiting users I would recommend a minimum of 5 groups, as this will give you 5 unique sets of data to analyse. While this is just my opinion, it is loosely based on Jakob Nielsen’s Why You Only Need to Test with 5 Users.

If your site or application has a diverse range of users then perhaps consider 3 groups of each user type, or persona if you have created these.

Grab a copy of the analysis spreadsheet

Donna Spencer, whose definition of card sorting I used to open this post, has done a lot of work in the area of card sorting. This includes the production of some fantastic resources and materials to help you with your card sorting.

While the post is 5 years old, the resources are regularly updated to ensure that formulas continue to work in the latest versions of Excel etc. For that reason I’d like to point you directly to Donna’s post on the Rosenfeld website.

Running the experiment

After the planning, the preparation of the cards, recruiting your users and scheduling the sorts to take place, running the actual sorts is relatively straightforward. However, there are some things that you can do to make the sorts go more smoothly:

Have a facilitator

If you’ve run any workshops in the past you’ll be more than use to facilitation. You should consider using the same skills to facilitate an experiment. The sorts of things that you should lookout for are as follows:

  • That everyone in the group is having an equal input
  • That the group is working as a team and not working against each other
  • That the group remains on topic

Do your best not to lead a group into making decisions, but you can advise or clarify to some extent if the group are running into real difficulty.

Run through an example

Just as I have given the example of a supermarket in this blog post, you should make up 10–15 cards in advance that you can show the group. I like to use items from around the house. This quickly shows that items can be categorised by room (bedroom, lounge, kitchen etc.) or by type (electrical, furniture, soft furnishings etc.).

Be sure to explain that both are valid ways of categorisation and that the users should go with the approach that they are most comfortable with.

Observing and scribing

As well as having a facilitator in the experiment, I would suggest having someone available to observe and scribe. If you have access to an observation lab the scribe could sit in the observation suite taking notes and recording the discussion and actions made by the group. It is not essential that your scribe be hidden away; I have been known to sit in on the experiment, recording, note-taking and in some cases photographing the group’s activities.

If you are unable to get a scribe it is at least worth considering recording the session. Most iPods/iPhones will record through the headphone socket using the headphones straight to mp3, and you’ll be grateful for having something to go back to when you’re writing up your findings.

Spare cards and envelopes

Although this is a relatively low-cost experiment you’ll need to raid the stationery cupboard a few times…

It’s always worthwhile having some blank cards with you. Some users may want to split content or duplicate it into multiple categories, like the cotton wool balls that live in both ‘baby stuff’ and ‘cosmetics’. Allowing your users to duplicate the card by hand and putting it in both categories will act as an aid when it comes to the analysis.

You should also have a number of large envelopes with you. I like to take large A4 envelopes with me and allow the group to write on them and use them as placeholders for their categories. At the end of the experiment you can quickly collect up the cards and store them in the envelope that they are piled on. If you are doing more than one sort on the same day, ensure you write clearly on the envelopes which sort the cards belong to.

The analysis

Once you’ve populated your spreadsheet with all your categories and cards from your sorts its time to start standardising your categories. When doing this you’re looking for 3 things, in my opinion:

1. The groupings of the cards into categories
Irrespective of the category’s name, you want to look out for agreement by users that a set of cards should be categorised together.
2. The names given for the same category
Once you’ve identified that all your users put the cards bread, french stick and bagels into the same category then it’s time to review what they called those groups: 

  1. Breads
  2. Baked goods
  3. Bakery

You will have to come up with a naming convention and try to be consistent across all your categories. When picking a category name it may even be wise to consider search term traffic, if you are looking to optimise your site for search.

3. Spotting the dump category
When undertaking your analysis be mindful of the users that were in the groups. Consider their understanding of the content and whether they are the target audience for all your content. I sometimes refer to something I have observed in the past as ‘the domain specialists vs. the dump category’. I’ll give you an example: where in a supermarket would you find Quorn (imitation meat mycoprotein) or a USB stick? If you’re a vegetarian or cook for vegetarians there is a good chance you know exactly where to go for the first item. Likewise, if you are into gadgets and are tech-savvy there is a good chance you’ll be able to find a USB stick in a supermarket and you’d know where to put it in a card sort. However, if you are neither a vegetarian nor a techie then you may simply give up. You may even create a category called ‘Misc’ and dump in it any cards you can’t place.
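
Before you dive into the spreadsheet, a quick way to surface grouping agreement is a pairwise co-occurrence count: for every pair of cards, tally how many groups placed them in the same category. A minimal sketch, with the sort data invented for illustration:

```python
from collections import Counter
from itertools import combinations

# One dict per group: category name -> card IDs placed in it (invented data).
sorts = [
    {"Breads": ["#1", "#2", "#3"], "Cosmetics": ["#4", "#5"]},
    {"Bakery": ["#1", "#2", "#3"], "Baby stuff": ["#4"], "Misc": ["#5"]},
    {"Baked goods": ["#1", "#2"], "Cosmetics": ["#4", "#5"], "Misc": ["#3"]},
]

pair_counts = Counter()
for sort in sorts:
    for cards in sort.values():
        for pair in combinations(sorted(cards), 2):
            pair_counts[pair] += 1

# Pairs sorted together most often are your strongest category candidates.
for (a, b), n in pair_counts.most_common():
    print(f"{a} + {b}: grouped together by {n} of {len(sorts)} groups")
```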

More on analysis

There is quite a bit more to the analysis of card sorts, probably enough for another blog post on its own. As I suggested earlier, some of the best resources on ‘using the spreadsheet’ can be found on the Rosenfeld resource site.

Testing your new categories

So once you’re pretty happy that you have your categories, you have the items in them and you have determined a logical hierarchy, it’s time to test your IA. The following 2 methods can help to validate your IA:

Treejack

Treejack is an IA validation tool that allows you to test your IA without visual distractions.

You can create tasks for a user to undertake such as “You want to buy some fresh bread to make bacon sandwiches at the weekend, where would you locate the fresh bread?”.

The tool tracks the users path through the navigation and produces reports on their outcomes.
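
Even without the tool, the heart of the analysis is easy to picture: compare each recorded click path against the correct destination and bucket the outcomes. A minimal sketch, with the task, paths and category names invented for illustration:

```python
# Each result is the sequence of categories a user clicked for one task.
# Task: "Where would you locate the fresh bread?" (data invented).
correct_destination = "Bakery"
recorded_paths = [
    ["Food cupboard", "Bakery"],          # indirect success
    ["Bakery"],                           # direct success
    ["Food cupboard", "Tins", "Misc"],    # failure
]

direct = sum(1 for p in recorded_paths if p == [correct_destination])
indirect = sum(1 for p in recorded_paths
               if p[-1] == correct_destination and len(p) > 1)
failed = len(recorded_paths) - direct - indirect

print(f"Direct success: {direct}, indirect: {indirect}, failed: {failed}")
```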

Task based testing

Once you have Treejack-tested your IA it’s time to include it in any prototype that you build during the design phase of the project. At Sigma we are big advocates of developing prototypes for testing, prior to sites or applications being developed.

Both these approaches will give you insight into how users interact with your categories and the content within them. Based on the results you may want to refine your IA further and re-test.

Conclusion

There are a lot of steps to this post and you may need to refer back to them at different times. The steps to a successful card sort are:

  1. Pick & prep your cards
  2. Run your experiment
  3. Analyse your results
  4. Test your findings

Card Sorting: Designing Usable Categories by Donna Spencer 

For more on Card Sorting I strongly recommend this book. In it, Donna describes how to plan and run a card sort, then analyse the results and apply the outcomes to your project.

It’s a great addition to the bookshelf and a great reference before, during and after a card sort.

[1] Spencer, Donna & Warfel, Todd. 2004. “Card sorting: a definitive guide.” [online]. Published on 7th April 2004. http://www.boxesandarrows.com

Prototyping: understanding the benefits to fidelity

If you’ve been around any software development projects then there is a good chance that you’ll have seen wireframes. Some sketched, maybe some mocked up in prototyping tools or crafted in photoshop. They are a popular tool to help conceptualise page layouts or even go as far as to demonstrate interactions in an application.

The level of detail or interaction that is present in a prototype is often described as fidelity. Fidelity is the degree of closeness to the “depth, breadth and finish of the intended product” (Hakim & Spitzer) [1].

To help explain the difference between low and high fidelity the following definitions are from Usability First:

low-fidelity prototype

a prototype that is sketchy and incomplete, that has some characteristics of the target product but is otherwise simple, usually in order to quickly produce the prototype and test broad concepts. [2]

These are the prototypes that you’d sketch in your notepad when describing the layout of a page or the interaction of an element with a colleague. You may capture them in a meeting with a client as a visual record of what has been discussed, so that you can work them up later into a design concept or higher fidelity prototype.

high-fidelity prototype

a prototype that is quite close to the final product, with lots of detail and functionality. From a user testing point of view, a high-fidelity prototype is close enough to a final product to be able to examine usability questions in detail and make strong conclusions about how behavior will relate to use of the final product. [2]

Unlike the quick sketch in your notebook, a high-fidelity prototype is at the other end of the scale. It’s likely to be rich with interaction, styled graphics and layouts that give the appearance of the finished product.

These are the two extreme ends of the scale, and in reality you will often be working somewhere between them. Some prototypes will have interactions but not much visual treatment. Others will be worked up to a high level of design treatment but are just flat images, produced in Photoshop with some image regions placed over the navigation to give the impression of a clickable ‘wired-up’ prototype. The reality is that often prototypes are just the vehicle to help us prove a concept or elicit requirements. They are not the end goal, just a means to help you reach it, so we take them as far as we need to go – as far as they remain useful – and then we stop and move into code. Your choice about whether you go with a low vs. high-fidelity approach is likely to be based on a time/cost/benefit decision, and each project is likely to be different.

There is nothing wrong with low or medium fidelity prototypes. I’ve worked on a number of great projects where this level of prototyping has generated some real rewards for all parties involved. However, more recently I have been working with very high-fidelity prototypes in two new ways. They are:

  1. JAD sessions
  2. Task based user testing

1. JAD Sessions
The purpose of a JAD (Joint Application Design) session is to dramatically reduce the timeframe required to complete a deliverable where consensus is required, by getting all of the major decision-makers, stakeholders and knowledge providers into one place at the same time.

The real-time production of a prototype during a JAD can help stimulate the discussion, capture decisions and demonstrate ideas and concepts to non-technical stakeholders.

Questions like “What fields do we want to capture during the signup process?” are easily answered when a marketing representative is present along with your database administrator. Consensus is reached and the prototype is updated to reflect this detail. The designer gives their input on the layout, one of the developers is able to highlight a constraint of the existing technology stack, and then you move on to the next decision.

The prototype acts as the living record: an organic deliverable that in the space of 2 or 3 days has grown to encompass all the requirements, decisions and actions.

2. Task based user testing

Task-based user testing is all about learning how users interact with your application or website in order to access information and complete tasks. Nielsen states that the biggest improvements in user experience come from gathering usability data as early as possible. He estimates that it’s 100 times cheaper to make changes before any code has been written than to wait until after the implementation is complete [3]. With this in mind it’s important to get user feedback and research findings as early in the process as possible; waiting until the site or application is live can be costly and impracticable.

While the prototype is not the finished product, it’s important that it is worked up to a very high fidelity so that users are comfortable with its look and feel and aren’t distracted by inconsistent elements or broken interactions. The prototype requires a high level of polish and a set of well-crafted scenarios and journeys through the prototype to test.

Working a prototype up to this degree of fidelity requires time and skill, but it’s a worthwhile investment to ensure that you can capture user feedback in the design phase, before any code is written.

Conclusion

Prototyping at any level will give you a great return. Your decisions on how close to make it to the finished product will depend on your budget, your deadline and the influence you are trying to achieve with it.

[1] Farnum, Chris. 2009. “What you should know about prototypes for user testing.” [online]. Published on 29th July 2009. http://www.boxesandarrows.com
[2] “Usability First, Glossary”. Accessed on 11th July 2012. http://www.usabilityfirst.com/
[3] Nielsen, Jakob. 2003. “Paper Prototyping: Getting User Data Before You Code.” Published on 14th April 2003. http://www.useit.com

Understanding Business Rules and Data Requirements

Last week I was thinking about how to capture and analyse business rules and data requirements for a current project. I’ve worked on a number of projects where the focus can often be on either rules or data; however, focusing on one while leaving the other out can cause issues further on in the project life cycle. I learnt this on previous Business Intelligence/Data Warehousing projects, where over-analysis of the data can lead analysts to ask more questions of the data than the end-users or owners ever wanted to ask, while key rules go unenforced.

While doing some reading around the subject I came across Mary Gorman’s IIBA Spotlight Webinar on Business Rules and Data Requirements.

But firstly, what is a rule?

A declaration of policy or conditions that must be satisfied… in order to define, constrain or enable our system or process behaviour. (Gottesdiener 2005)

Mary discusses 4 types of rules that can aid a BA in categorising rules and their corresponding data dependencies. These 4 types are:

  • Term Rules
  • Fact Rules
  • Constraint Rules
  • Derived or Calculated Rules

I am listing them below more for my own reference benefit, but hopefully others will find them useful.

Term Rules
Defining nouns such as “Customer” or “Gift Card” is important. This task is not just for your glossary but for giving context to your system or process. We’re able to identify actors (Customer) from term rules, and this helps us define the scope or boundary of the work to be undertaken.

Fact Rules
Fact rules describe a two-way relationship between term rules. For example:

  • A purchase is paid for by a gift card
  • A gift card pays for purchases

These can be extended to use cardinalities:

  • A purchase may be paid for by one or more gift cards
  • A gift card may pay for one or more purchases

As we identify fact rules we can determine data attributes. For example:

  • A customer has an address. An address is made up of address lines, city and location.

Defining these data attributes allows us to define the scope of the rule – for example, that we don’t want a second address line.
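
Fact rules and their cardinalities map almost directly onto a data model, which is one reason they’re worth capturing precisely. A minimal sketch of the gift card example, with class and field names of my own choosing (not from Mary’s webinar):

```python
from dataclasses import dataclass, field

@dataclass
class Address:
    # The term rule scopes the attributes: a single address line, by decision.
    address_line: str
    city: str
    location: str

@dataclass
class GiftCard:
    card_number: str

@dataclass
class Purchase:
    # "A purchase may be paid for by one or more gift cards" -- the list
    # captures one side of the many-to-many fact rule.
    gift_cards: list[GiftCard] = field(default_factory=list)

@dataclass
class Customer:
    # "A customer has an address."
    name: str
    address: Address
```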

Constraint Rules
Constraint rules can be used to constrain the action of a rule. In this example an expiration date is added to the gift card.

  • A gift card’s expiration date must be equal to or greater than the purchase date.

Derived or Calculated Rule
A calculated rule allows us to derive new attributes from existing ones. For example:

  • A gift card’s expiration date is calculated from the date of activation plus 365 days.
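
To make the last two rule types concrete, here is a minimal sketch expressing both the constraint and the derived rule above in code (the class and method names are my own invention):

```python
from datetime import date, timedelta

class GiftCard:
    def __init__(self, activation_date: date):
        self.activation_date = activation_date

    @property
    def expiration_date(self) -> date:
        # Derived rule: expiration date = activation date + 365 days.
        return self.activation_date + timedelta(days=365)

    def can_pay_for(self, purchase_date: date) -> bool:
        # Constraint rule: expiration date must be >= purchase date.
        return self.expiration_date >= purchase_date

card = GiftCard(activation_date=date(2012, 1, 1))
print(card.can_pay_for(date(2012, 6, 1)))   # True
print(card.can_pay_for(date(2013, 6, 1)))   # False
```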

Gottesdiener, Ellen, The Software Requirements Memory Jogger: A Pocket Guide to Help Software and Business Teams Develop and Manage Requirements. Goal/QPC, 2005

Stakeholder Interviews

This week I have been revisiting the topic of Stakeholder Interviews, in the hope of refreshing my own thoughts on the subject. I’ve found some great resources that I would like to share, along with a number of hints and tips from my own experience of undertaking Stakeholder Interviews. I should note that this post is not intended to be a definitive list of potential questions, more a guide for those who wish to understand how to undertake Stakeholder Interviews.

Firstly, what is a Stakeholder Interview? Well – I couldn’t find a book definition to reference, so I have made one up:

An interview between a Business Analyst/Project Manager and pre-identified stakeholders of an organisation or department, for the purpose of understanding a current system and/or eliciting requirements for a new system.

Like many interviews that you will have taken part in, either in seeking employment or helping in market research, they follow what is known as a structured interview style.

Bryman & Bell (2003) describe the purpose of the structured business research interview as being “for the interviewer to elicit from the interviewee or respondent, as he or she is frequently called in survey research, all manner of information: interviewees’ own behavior or that of others, attitudes, norms, beliefs and values.”

So now that we know what a Stakeholder Interview is, how do we undertake one? Thanks to Brad Botz over at BA Mastery, I came across a great podcast that he put together on the same topic. Brad suggests breaking down your questions in the following way:

Function
– How do you use this software? What are some examples of what you do regularly? Which actions do you perform the most?
Frequency
– How often do you perform these tasks?
Preferences
– What do you like best about the software?
Problems
– What problems do you run in to on a regular basis?
Expertise
– What are the tips and tricks that you have learnt? Are there any workarounds that you have discovered to make the tasks easier?
Miscellaneous
– Have I missed anything? Is there anything that you would like to show me?

It is important to stay on topic with this last set of questions. Try not to allow the stakeholder to deviate from the original purpose of the interview, and remain positive about the system even if their own concerns or experiences are negative. It’s also important to remember to let the stakeholder follow the course that they wish to take throughout the interview. While it is appropriate for the interviewer to help guide the respondent, encouraging them down a particular route is not appropriate and should be avoided where possible.

In Summary

Advantages:

  • If you build a rapport you are going to get much more acceptance of the project, not just through the interview but throughout the life cycle of the project
  • It’s easy to conduct interviews – easy to extend them if they overrun and easy to record their findings

Disadvantages:

  • It can be difficult to obtain consensus if there are many stakeholders
  • Follow-up questions are heavily reliant on the interviewer’s experience of the interview process and their understanding of the domain they are working in
  • You may lead someone accidentally

Bryman, A. and Bell, E., 2003. Business Research Methods. New York: Open University Press.

Users, Roles, and Personas

The ability to understand a user and their role in a software system will not be new to many. We have all interacted with applications and software where there is, at minimum, a normal user and an administrator. Users can’t do all the things an administrator can do, while an administrator can do all the things a normal user can do – plus some additional things.

However, when we look at systems in this way all we are really modelling is permissions. Who has permission to perform a task? Who is allowed to perform an action? If someone does something wrong, who will moderate this? These are all valid questions about the system and its intended Access Control List (ACL), but they capture nothing about the individuals that will be using the system, what their goals are and what things we can do with the interface that will improve performance for the users.
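
Seen in code, that permissions-only view is little more than a lookup table. A minimal sketch, with the role and permission names invented for illustration:

```python
# A bare Access Control List: roles mapped to permitted actions.
ACL = {
    "user": {"view_content", "edit_own_profile"},
    "administrator": {"view_content", "edit_own_profile",
                      "edit_any_profile", "moderate_content"},
}

def can(role: str, action: str) -> bool:
    return action in ACL.get(role, set())

print(can("user", "moderate_content"))           # False
print(can("administrator", "moderate_content"))  # True
```

Nothing in that table tells you who the user is, what they are trying to achieve or how the interface should support them – which is exactly the gap personas fill.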

I found myself in this position on a recent project and something I read caused me to rethink my approach. This is what I read:

User roles do not resemble real people nor are they intended to; roles are spartan abstractions narrowly focused on those aspects of the relationship most likely to be relevant to presentation and interaction design. Compared to personas, user roles are a more technical and formally structured model.

By contrast, personas are figurative models rather than abstract models, that is, they are constructed to resemble real users, even down to photos, background information, and personal history. Verisimilitude most likely contributes to the popularity of personas. They sound like people you could know, and over the course of a project can take on a reality that encourages empathy and facilitates thinking from the user perspective. What is more, many people find that the creative process of constructing personas to be engaging and energizing. Personas are fun. (Constantine 2005)

The suggestion here was that when modelling users for a system it wasn’t a case of picking one or the other, role models or personas, but that really the two needed to be carried out together in order to get the fullest picture of how a user would use the system and what they wanted to achieve from the system – not just what they would have permission to do.

I decided that it was time to put this into practice, to give it a go. Was I going to add persona modelling to my toolkit or not? The system I was working on (and still am currently) is for managing learners in a learner centre. The scenario I am going to use as my example is quite simple: learners attend the centre in the morning and are greeted and registered by a ‘learner administrator’. The learners then commence on a piece of learning and are assessed by a ‘tutor’.

In this scenario I identified two users: a learner administrator and a tutor; the learner was not interacting directly with the system. However, due to resourcing issues at the centres it was important that learner administrators and tutors had the ability to perform the same actions. In essence, the two users share one role.

So under the role-based model this is fine, nice and simple. I don’t really need to create two user types; I can just create one and add a label that identifies their job title, in case I ever need to reference it somewhere. However, I’d decided to add personas and see if anything changed. Here are my personas:

Learner Administrator

Sarah is a single mum in her early twenties. Her day consists of checking students in, chasing the ones that haven’t turned up and ensuring that the correct reports are run so that the centre can claim its allocated funding for the learners they have had through the door. Between admin tasks Sarah uses Facebook to see what others are up to and uses an IM client to chat to her boyfriend. She’s a confident computer user who finds the system easy to navigate.

Tutor

Phillipa is an ex-school teacher. She works as a tutor part-time, more to keep her busy than because she is dependent on the income. Phillipa has older children who are no longer at home and her husband is a consultant surgeon. Her day-to-day tasks are the assessing of learners, recording their outcomes and helping them to move on to the next piece of learning. Phillipa is focused on the learner and finds the need to record everything on an online system a chore. She only uses about 25% of the system’s functionality, but the areas that she does use are used relatively frequently. Phillipa doesn’t mind using a computer for her tasks but likes to have people on hand to check with and ask questions when needed.

From this it became very obvious that my two users with the same permissions role were going to be using the system for very different purposes. I had been able to identify this early, rather than waiting for User Acceptance Testing (UAT) and I was able to do something about the design of the system and its interface.

The system has not been fully implemented yet, but one suggestion that has come out of this was a configurable dashboard. The dashboard is made up of a number of blocks and pods that either display content or act as forms and filters into other areas of the system. The layout of this dashboard would be dictated by the priority of certain day-to-day jobs, allowing learner administrators and tutors to rearrange these pods to suit their needs. Also, based on the understanding that the tutors were generally older, less computer-savvy users, I opted to prioritise the default dashboard around their needs so that they would need to do less configuring, allowing the more confident learner administrators to customise their dashboards heavily.
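
As a sketch of how that might hang together (the pod names and structure here are hypothetical, not the actual system’s), the per-role defaults and user overrides reduce to a small piece of configuration:

```python
# Default pod layouts per role, ordered by day-to-day priority.
# Tutors get their core tasks first so they need to configure less.
DEFAULT_DASHBOARDS = {
    "tutor": ["assess_learner", "record_outcomes", "next_learning"],
    "learner_administrator": ["register_learners", "chase_absentees",
                              "funding_reports"],
}

def dashboard_for(role: str, user_overrides: list[str] | None = None) -> list[str]:
    # A user's saved arrangement wins; otherwise fall back to the role default.
    return user_overrides or DEFAULT_DASHBOARDS[role]

print(dashboard_for("tutor"))
print(dashboard_for("learner_administrator",
                    user_overrides=["funding_reports", "register_learners"]))
```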

So what did I learn from this exercise?

  • I learnt that an ACL role model isn’t always enough
  • Roles need context. Personas allow us to add context
  • You can create an interface that is different for two users but only charge them for one.

Constantine, Larry. 2005. Users, Roles, and Personas. [online]. Available from: http://www.foruse.com/articles/rolespersonas.pdf

2011: The Year of Professional Development

I decided this year that 2011 was the year of some professional development. Well, I decided at the end of last year and opted to renew my lapsed student BCS membership for 2011.

Having studied on a BCS-approved degree in Manchester, I knew about the BCS’s work and acknowledged them as a group who act as a common forum for both academia and industry. It is the professional qualifications that I was particularly interested in, many of which are in the areas of Business Analysis and Business Systems Development.

I liked the BCS’s definition of CPD, that focused on maintaining competencies:

Continuing Professional Development (CPD) is the organised continuation, improvement and broadening of knowledge, understanding and skills, as well as development of personal qualities necessary for an individual to maintain their competence in order to undertake their duties throughout their working life.

On graduation in 2009 I thought I was set, that I had everything that I needed and knew all I needed to know for my career. I’ve since learnt that what I studied was both excellent and dated in equal measure. Some of the tools I studied are no longer in common use and others are no good unless adopted by the whole organisation. In short, it was time to do some more learning.

I hope that there will be some more posts to come on the topic of CPD this year as I look to keep the blog updated with lectures, courses and conferences I attend.

I’m also keen to get along to some of the BCS Manchester events this year. The weekly email I get from the branch is always packed full with some great events and opportunities to get along and meet more like minded professionals. No doubt more posts will follow on this.

Definition: Business Analyst

I thought I’d do some simple posts that just outline what I mean by certain terminologies. Some terminologies can get a little vague as we start to cross methodologies. The first on my list is

Business Analyst

.

A person who analyses the operations of a department in order to develop a solution to its problems.

I really like this definition. I’m sure there are those who will hit me over the head with definitions from the BCS or the IIBA; however, I think it’s pretty good. I like it because, firstly, it’s simple. Simple to understand and simple to explain. I also like it because it doesn’t mention technology.

There are obviously variations on the BA role:

  • Systems Analysts
  • Process Analysts
  • etc.

While I have been known to undertake elements of these in my work, I’m proud to say I’m a Business Analyst.