The Web was not centralized. For years [company confidential] had yet to provide a standard, funded, and centralized means of delivering website services to the organization as a whole. The nature of their structure was such that individual areas, departments, groups, and stakeholders needed to maintain entire websites, or collections of websites, for their individual use. Areas with the resources (monetary and/or technical) had more flexibility and options in how they managed web services. Areas that lacked adequate resources either went without or did the best they could. The result was a landscape of mixed quality, content, and technologies. Some areas even opted to outsource some level of the responsibility to third-party providers, including the hosting itself, which meant that hosting wasn't standard enterprise-wide either.
This allowed for a range of issues beyond the lack of any consistent level of quality. There was no overall voice, no coherent thread running through it all. Over time the neglect had produced an organically grown system of those that flourished and those that did without. It also meant there was no direction from the main branding department in how each area presented itself. Eventually some departments even opted to drop their organizational branding, making their websites appear as if they had no affiliation with their parent org at all. In the end, could you blame them?
They were, in effect and from their own perspective, abandoned. Many of them had asked for help on countless occasions and received none.
However, there was also a multitude of ingenuity. Faced with limited resources and a lack of centralized assistance, areas had created some rather inventive solutions to their problems. It was a goldmine of insight and real-world case study. If any centralized solution was to succeed, this could not be ignored.
It was specifically requested that a centralized unit of service be created, dedicated to providing not only proper resources but also a guiding voice of direction. I was hired to build that unit, flesh it out, and see it implemented. The concept was simple: an internal web agency, modified for the unique nature of [Company Confidential]. Its primary task would be to create a platform from which it would provide those services.
Let’s Begin Exploring
I think it first helps to decide what it is I’m attempting to explore. For me the mechanic for distilling this down is to create a one sentence mission statement for the project. It doesn’t have to be perfect, but it gives me somewhere to start.
In this case, we began with this:
A website services platform, enterprise-wide, designed from all layers of the stack and all layers of the interactive experience.
And ended with this:
A platform to develop, host, and manage websites enterprise-wide.
It could be longer and elaborate further, but that's good. That's the meat of it. Now we can start the Discovery process.
Discovery hinges on research and that research’s catalyst is always questions. Those questions come from a framework, a mapping system if you will. In this instance I came up with the following framework of priorities and needs.
Progression of Investigation & Discovery
This is good, this gets things going, but it is not full of action. We've got to keep things in the active, the explorative. To do that we need to turn this into a framework of questioning instead.
Line of Questioning
The overall purpose of the line of questioning is to produce enough data to gather answers that can be used to crystallize the details of what will be undertaken. A base framework is created to begin with. It operates like a tree with a short level of branching by default. However, the framework is not all-encompassing; it is merely a platform. Thus, as the process progresses, questions will be asked following the framework, and those will lead to new questions not listed in the planned line of questioning. This allows the Discovery to begin in a standardized way and maintain an overall sense of generality, but adapt and grow along the lines of questioning that prove useful or interesting. Often there are questions that need to be asked but are unknown in the beginning. This framework allows one to work organically into those unknown questions and, by proxy, into their related answers.
By the end of the Discovery, all of the framework questions will have been asked and answered, but not all of the questions asked will have come from the framework. While it is an overgeneralization, as a rule, a low number of new questions often means the Discovery will not produce a viable Scope.
Questions are designed to avoid binary answers and instead require more comprehensive analysis. A simple "yes" or "no" is hardly useful; the questions must provide a higher fidelity of answer than that. It is also possible that a single question in the framework could take weeks of data gathering to answer in a way that is useful to building the Scope.
For example, the question "Who are our Users?" sounds like an easy question to answer, but often isn't, because the 'obvious' answer is generally misleading and missing a great deal of information. A client will generally list the people using the website, who may sometimes be referred to as customers. The issue is that if any employee of the company has to log in to the website to edit it, they are also users, but the company did not view them as such. Thus, perspective bias and norms have to be dissected and removed to get an accurate answer to this question.
An even harder one is, "How are our Users divided and subdivided?" Answering this question largely depends on what kind of overall solution you are trying to provide. You want to segment people in a way that is useful to your strategy, avoids bias, and cuts through internal perspective. For example, you can divide them by the complexity of their needs for the site, or by the level of technical skill they truly have vs. what it is perceived they have, or both. Getting that data requires a lot of conversations, a lot of questions, and a lot of data crunching. Often those answers will just lead to more questions, which makes for a healthier, more robust Discovery.
Defining our Users
As we moved through the line of questioning in the Discovery, the first major milestone was getting a proper sense of the Users. These were measured along a few different dimensions.
First, users were broken down by identity, in terms of their relationship with the Organization.
Second, those groups were broken down based on tense. For example, an existing employee and a prospective employee are not the same thing. If we just listed employee we would miss that deeper dimension of definition. For [company confidential] this was a critical distinction to make.
Next came an attempt at defining the User based on their needs and how those needs overlapped, divided by tense. This was mostly conducted through a series of sit-down discussion sessions with standardized starting questions that expanded from there. I then compiled the data over a period of time, comparing needs between groups to determine similarity and overlap.
Over time I was able to create more ideal User-need maps of where things needed to move.
Then, I looked at users based on complexity of tasks that they would perform related to a website.
This allowed us to put people into two major groups: FE (front-end) users and BE (back-end) users.
FE and BE users had a lot of overlap in how they operated. They would be interacting mostly with the interface of the website on one side or the other.
Developers would have more deep layer access and needs and would be interacting directly with code in some way and seeking a more advanced set of features. For the time being they were set aside to be handled a bit differently.
When the FE & BE groups are added together with the other dimensions of user definition we get something that looks like this.
This lets us have a nice quick look at our map of users in these different dimensions in the FE & BE user space.
Next, it’s important to plot out some research related to technical skills for BE users. And that looked like this.
There's no real point in doing this for FE users, as we need to make something that is inherently usable and intuitive to people who completely lack advanced technical skills. One does not need deep technical knowledge to use an iPad or read the NY Times, for example.
However, for BE users this might have been the most useful bit of data uncovered at this stage of the game. It was the first major discovery, but it wasn't at all surprising. What it showed was that two major groups greatly overestimated their own and others' technical ability. This began to explain why for so long the ask for resources had not been taken seriously. Those deciding thought they knew more than they did, and that those asking were capable of more than they were: a negative feedback loop.
Defining our Websites
With that data in hand we could begin looking through the needs these groups had and the current vs. ideal overlap, and could begin defining website types. Since the end product of this entire platform is to host, create, and manage websites, what kinds of sites are we handling?
That involved several more rounds of questioning, much of which boiled down to having folks expand upon their earlier needs answers and looking at what they had, how it worked, and how it didn't.
In the end, eight website types were identified.
- The Club
- The Department
- The Project
- The Vanity
- The Resume
- The Alert
- The Portal
- The Database
The research at this point begins to compound and is put to use outlining the components that make up the functionality of each of these site types. What are their feature sets? What are they providing to the FE user? What special things do they provide to the BE user?
This grows into a complex multi-dimensional set of data that has a lot of intersection points. There’s a lot of murder wall whiteboarding at this point. But once refined it looks like this:
- Content List
- Image Gallery
- Events list
- News List
- Blog List
- Link List
- Social Media Feed
- People List
- DB List
- Q/A List
- Pull Out/Call Out/Highlight
- Text Area
- Block Quote
- 25/25/25/25 (repeatable)
- 33/33/33 (repeatable)
- 50/50 (repeatable)
These are the base-layer components that groups can use to build out their individualized content. Not every group needs every component or layout, but this is the minimum set of options needed to meet the most needs without an unmanageable feature set.
Defining our CMS
We can now combine all the User research, the website research spawned from it, and the ongoing technical research, and begin the second murder wall to define what our CMS needs to look like.
That gets broken down into two lists: requirements and nice-to-haves.
- Custom Content Types
- Custom Fields
- Sane WYSIWYG (at a minimum for text/image placement)
- Sane Query LOOPing
- Media Management
- Parent/Child theme relationships
- LAMP Stack
- Open Source
- Community External
- Community Internal
- Low overhead
- User Friendly OR adaptable to be
- Non-static w/ DB
- Sane Theme Template Cascade:
- per page
- per content type
- per section type
- User Role Based Permissions Control
- Update Schedule (Security, bugs, etc.)
- Themeable/Configurable BE UI
CMS Nice To Haves
- Deviations from CORE can be stored in:
- Child Theme
- Support for third-party scanning/security
- Commandline Support
- Easy Local Dev Setup
- Themeable Setup/Blueprintable
- Agnostic OR adaptable to be
- “Dumb” Theme (minimal base files and config needed)
- Multisite Capable
Finding our CMS
Once we had our requirements there was no need to do a broad comparative analysis of CMS vs. CMS. We merely had to find CMSs that met our criteria in one of the following ways, then sketch them out based on each level of requirement match.
Present - this requirement currently exists within the CMS as first-class implementation
Roadmap - this requirement is on the roadmap to be added within our overall timetable
Extendable - this requirement is not present, but could be added by ourselves AND the system supports its addition.
The reflex would be to create a table similar to the above, to score things in a simplistic way. This is really just a modified pros-and-cons list, and it doesn't allow for much dimensionality of comparison. The goal here is not to create a neat little table that will look good in a slide or a quick presentation. Instead, we want our research to be useful in a practical and informational sense. We want to answer questions and allow for the nuance of the situation; we don't just want binary possibilities. We can say yes or no to "Custom Content Types", but what if one of them has a more user-friendly implementation? What if one loads better? What if one is more extendable? Instead of trying to expand our scorecard to accommodate more parameters, it is far more useful to denote in long-form research the findings around each Requirement, and how they operate within their context AND in relation to the end user (whomever that may be).
Already a narrative is beginning to form in our research. We can begin walking ourselves and our stakeholders through not only the process, but the findings of that process. We can even project forward a bit and see how the story would play out with each of these CMSs on a larger scale. This means we are on the right track. It is not enough to draw conclusions, we have to begin forming a narrative. The narrative is the structure that will hold this entire project and process together in the end.
WordPress As the CMS
WordPress won out for a lot of reasons, but a big one was that a lot of departments had already come up with inventive implementations of WordPress on their own. This meant a rich community to draw from and one that could be invested in rather than trying to start a brand new direction from scratch.
Change is hard. Change in a large organization that has a lot of old habits is very hard. This allowed a pathway that was evolution and not revolution.
Once the decision was made we dug deeper into the areas that had already made WordPress work, to whatever degree, to meet their needs. The largest such area accounted for nearly 40% of the org's entire site environment. We learned a few key details here that really informed next steps:
They had gone with off-site third-party hosting with Media Temple, not because they saw the in-org hosting as technologically insufficient, but because the level of server access they needed to adequately maintain their setup was not a level of access the internal IT department was comfortable with. Deeper research showed this to be less a matter of official policy and more a matter of Sys Admin personal preference. The IT department in general was not well thought of; it was considered combative, less than helpful, and illogically stubborn.
WordPress sites at scale on their setup were individual sites, set up and managed collectively with a hosting management system (cPanel). This was the big point of dispute that had led to external hosting: internal IT did not believe in hosting management systems and did not have the resources to keep one running and updated. The trouble is that at scale you can't reasonably upgrade sites individually, and so a tool specific to cPanel allowed collective updates and actions. They set up a base site and handed over credentials for site builds. They did not do any custom build or configuration; it was up to the client to effectively "build" out the site. Essentially they operated less like an agency and more like a stripped-down WordPress.com.
Everyone, with rare exception, was given the same basic template. To launch new sites they still had to go through IT, which had authoritative DNS control. This was also a point of tension. They felt they should be able to add their own DNS entries to speed up launches instead of putting in a ticket that could take anywhere from 24 hours to two weeks to resolve. IT's position was that security policy did not allow such access. Policy showed this not to be true; rather, there was no feasible and secure way to hand out access to anyone outside of IT without potential widespread repercussions for other users, largely because every site in the org was a subdomain.
Some kind of internal issue tracking system was a must to handle the influx of client questions, issues, bug reports, etc. Email alone would not suffice here.
We spoke to a few other areas, some with on-site setups and some off-site, and quickly learned that the roadblock in many places had something to do with the IT department. We also learned that the IT department was not overly fond of CMSs.
Clarity in the Platform
At this point in the process we had mounds of research on everything from the user spectrum to the possibilities and needs of a WordPress setup, but before we proceeded to dig into defining what a Platform would look like there were three major issues we needed to get to the bottom of.
Our research had begun to indicate that those in higher level decision making positions had no accurate view of the technical skills of the org and therefore set unrealistic expectations in relation to everything from job duties to resource approval.
We had begun to see a trend of the IT department being a roadblock in our ability to set up an environment.
IT's definition of what it did vs. what others thought and expected it to do did not match. Website service areas' definitions of what they did vs. what IT or upper-level Admin thought and expected of them did not match either. We were starting to see a gap in web service responsibility created by a "we said, they said" situation.
Changing the hearts and minds of a great many powers in a very large organization falls a little outside the scope of this project. However, without finding a way around or through that problem, my scope didn't matter.
The strategy here was that, in some way, these three issues were the result of just two things: a lot of ignorance and a lack of empathy. To continue, that needed to be resolved, at least in part.
I started with a series of classes/presentations outlining what a website really was. The starting goal was simple.
Illustrate a definition of “website” that we could all point to as a singular, shared point of reference.
In many ways this was handled like an internal marketing and advocacy campaign. The key for me was to show as much as possible without saying it, which is what led me to begin making website infrastructure diagrams.
These were by no means perfect; they were meant as talking points and question starters. But seeing all the connecting points, all these elements that must come together in a reasonable way just to get a website to show up when you type in a URL, began to change the conversation.
The next big part of education was to get everyone on the same page about where levels of responsibility currently lay in regards to maintaining a website.
I drafted a simple table document that outlined who was actually responsible for what, and all the areas of responsibility that had no one watching them.
This document, one page and by no means a graphic masterpiece, was probably the single most important diagram created for this entire project. It turned the direction of conversation after conversation. It lit up the eyes of tech-illiterate folks with understanding. It even traveled on its own into meetings I wasn't a part of, helping folks explain to their people where these misunderstandings were.
I followed it up with a version that showed us filling in the gaps in service. That is what got folks to really start buying into what we were doing. They could feel a need for something to be fixed. Now they were starting to see what was broken, and that we were starting to fix it.
The result of these two diagrams and the conversations that came with them ended up being a majorly successful education campaign, but the thing it did most was foster a sense of understanding that led to a relationship of trust.
Once I started to gain some trust I was able to begin being an advocate and mediator for all the groups at play. IT began to understand the frustration of web areas. Web areas began to understand the limitations and reasonings of IT.
The final turning point meeting was with IT, where a diagram outlining our desired server setup and responsibility definition of what we wanted to do on that server helped them understand what we were after.
They really didn't understand what it was that we were going to need, and this diagram helped clear that up. Once they saw it, they were very willing to cooperate on finding a way to help us set up this platform on in-org hosting.
This was important for a few reasons. We had limited resources to pull this off, and web space was already provided in a big way by a separate budget. This meant we could have enough scale to build what we needed from a hosting perspective without asking for any money. IT already supported, secured, and managed the hosting. It was a big win.
We now had a place for it all to go, now we just needed to fully outline the whole of the Platform.
Defining the Platform Tech
If we look back at our line of questioning you can begin to see how this part of the Discovery is where things start to get a bit more complex. I’ve got to answer a lot of questions and really define this thing out, but lucky for me I’ve already got a mountain of data and a lot of new relationships to draw from to start giving the Platform a clear vision.
This was also the part where I really started to lean heavily on outside agency models and processes, industry-wide standards, best practices, etc. I found things that would work with what we needed to accomplish and began to draft out the basics of setup and workflow.
Each site would be set up as a child theme. There would be one master parent theme that pulled in the org-wide branding, standards, and all basic features/components. Any individuality and character was then built on top of that system in the child theme. This meant we could add core features by updating just one parent theme and have them cascade downward.
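The mechanics of that cascade are standard WordPress: a child theme only needs a style.css whose "Template:" header names the parent theme's directory, and WordPress falls back to the parent for anything the child doesn't override. A minimal sketch (the theme names and paths here are hypothetical, not the actual org themes):

```shell
# Sketch: minimal WordPress child-theme scaffold (names/paths hypothetical).
THEMES_DIR="${THEMES_DIR:-./wp-content/themes}"   # assumed WP install layout
PARENT="org-parent"                               # hypothetical master parent theme
CHILD="dept-biology"                              # hypothetical per-site child theme

mkdir -p "$THEMES_DIR/$CHILD"

# The Template header is what ties the child to the parent; updating the
# parent theme then cascades features down to every child built on it.
cat > "$THEMES_DIR/$CHILD/style.css" <<EOF
/*
Theme Name: Biology Department
Template:   $PARENT
*/
EOF

echo "Created child theme at $THEMES_DIR/$CHILD"
```

Each child theme then carries only its site-specific templates and styles, which keeps per-site repos small.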
The HTML, CSS, and JS structure would be as modular as possible so that new features could be easily added over time.
Git was chosen as the version control option. No one in the org at this point used version control in a major way; there were a few SVN projects, but Git had a big community and seemed to work exactly the way we needed it to.
The goal for the beginning was to let Git be the record of change, a rollback tool, and also a means of deployment (manually) for putting up new projects or upgrading the platform in major ways.
Bitbucket did the heavy lifting of hosting repos. The parent theme got a repo and each child theme got its own repo.
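Using Git as a manual deployment mechanism, as described above, needs no special tooling: deploying is a clone, upgrading is a pull, and rolling back is a checkout of an earlier commit. A self-contained sketch using a local bare repo as a stand-in for the Bitbucket remote (all paths and names are hypothetical):

```shell
# Sketch: Git as record of change, rollback tool, and manual deploy mechanism.
set -e
WORK="$(pwd)/gitdemo"                              # demo workspace (hypothetical)

# Stand-in for the Bitbucket-hosted parent theme repo
git init --bare "$WORK/org-parent.git"
git -C "$WORK/org-parent.git" symbolic-ref HEAD refs/heads/master

# Developer working copy: commit a change and push it to the "remote"
git clone "$WORK/org-parent.git" "$WORK/dev"
echo "v1" > "$WORK/dev/version.txt"
git -C "$WORK/dev" add version.txt
git -C "$WORK/dev" -c user.email=dev@example.org -c user.name=Dev \
    commit -m "Initial platform release"
git -C "$WORK/dev" push origin master

# "Deploy": clone the repo into the (hypothetical) docroot; upgrades are pulls
git clone "$WORK/org-parent.git" "$WORK/docroot"
cat "$WORK/docroot/version.txt"
```

A rollback on the deployed copy is then just `git checkout <known-good-commit>` in the docroot clone.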
Standards guides began being drafted to outline documentation, syntax, security, etc.
A number of premium plugins were identified to handle key features that WordPress either did not do natively or did not do well enough. It was a priority to avoid building custom plugins wherever possible; each one meant another thing to maintain, and resources were limited. The biggest selection here was the Advanced Custom Fields plugin. Its ability to add fields with a handy GUI, while seamlessly integrating them into the existing backend UI, made for smooth extensibility. It was also extendable in its own right, so we could build things out if we hit a ceiling with it.
As all this began pooling together, the need formed for a means by which to quickly set up new projects to build out. That needed to cover three things:
- The dev environment to work in (or a subset of that)
- The site to begin modifying, as an application and collection of settings, etc.
- The child theme to begin modifying
To accomplish this I used a site-cloning plugin that would clone out a pre-built site that had all we needed. This did not cover the child theme itself; Git handled that just fine. Lastly, the dev environment was set up within MAMP to begin with, and adding a new dev space for a new site was just a matter of a few copy-and-pastes and a vhost edit. So far so good.
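Those "few copy-and-pastes and a vhost edit" amount to duplicating a pre-built template site and appending a VirtualHost entry. A sketch of that routine (the directory names, site name, and file locations here are hypothetical stand-ins; in MAMP the docroot and vhosts file live under /Applications/MAMP):

```shell
# Sketch: spinning up a new local dev site, MAMP-style (paths hypothetical).
set -e
DOCROOT="${DOCROOT:-./htdocs}"            # MAMP would use /Applications/MAMP/htdocs
VHOSTS="${VHOSTS:-./httpd-vhosts.conf}"   # MAMP keeps this under conf/apache/extra/
NEW_SITE="biology-dev"                    # hypothetical new project

mkdir -p "$DOCROOT/_template-site"        # stand-in for the pre-built base site
touch "$VHOSTS"

# Copy-and-paste step: duplicate the template site for the new project
cp -R "$DOCROOT/_template-site" "$DOCROOT/$NEW_SITE"

# Vhost edit step: point a local hostname at the new docroot
cat >> "$VHOSTS" <<EOF
<VirtualHost *:80>
    ServerName $NEW_SITE.local
    DocumentRoot "$DOCROOT/$NEW_SITE"
</VirtualHost>
EOF

echo "Added vhost for $NEW_SITE.local"
```

After an Apache restart (and a hosts-file entry for the `.local` name), the new dev site is browsable.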
For Git change tracking we just did what 37signals (now just Basecamp) did, which amounted to: branches are changes, and master is always deployable. Any merge needed two sign-offs, and you commit often and descriptively.
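That flow is simple enough to sketch end to end: every change lives on its own branch and only lands on master after review, so master stays deployable at all times (repo, branch, and file names below are hypothetical; the two sign-offs happened in Bitbucket, not on the command line):

```shell
# Sketch: "branches are changes, master is always deployable".
set -e
REPO="$(pwd)/flowdemo"

git init "$REPO"
git -C "$REPO" symbolic-ref HEAD refs/heads/master   # keep default branch 'master'
git -C "$REPO" -c user.email=dev@example.org -c user.name=Dev \
    commit --allow-empty -m "Deployable baseline"

# Every change gets its own branch, with frequent descriptive commits...
git -C "$REPO" checkout -b add-events-list
echo "events list component" > "$REPO/events.txt"
git -C "$REPO" add events.txt
git -C "$REPO" -c user.email=dev@example.org -c user.name=Dev \
    commit -m "Add events list segment"

# ...and only merges to master after review (two sign-offs in practice)
git -C "$REPO" checkout master
git -C "$REPO" merge --no-ff -m "Merge reviewed change" add-events-list
git -C "$REPO" log --oneline
```

The `--no-ff` merge keeps a visible merge commit per change, which makes master's history double as the record of what shipped.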
With local development running on MAMP, the requirements of the LAMP stack on staging and production were very standard and well documented by WordPress. IT had already hosted a number of individual WordPress sites for folks, so they knew what they were doing.
The relationship that we ultimately built with IT led to a discussion around Web Host Managers. We wanted to drop in cPanel as a collective management solution to keep all the sites updated. The issue was that IT had no resources to support it, and there was no approved license to allow a third-party vendor to do so. They compromised and said they could set up Plesk. We ran lengthy trials on it and found it just didn't do what we needed.
In the end we collectively decided that we'd be given a bit more systems-level access to manage the sites effectively, that we would not use a Web Host Manager at all, and that to solve group updating we'd take advantage of WordPress's multisite feature. All that relationship building really paid off here, because IT was very nervous about a single point of failure. We talked it out and they ultimately agreed it was best, and that they would allow it on the environment, but that we would be responsible for the application layer.
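The payoff of multisite is that group updating collapses to a handful of operations run once against the network, rather than once per site. As an illustration only (these wp-cli commands are how such updates are commonly scripted today; the source doesn't say wp-cli was used, and the path is hypothetical), a dry-run sketch:

```shell
# Sketch: group updating on a WordPress multisite network (illustrative only).
# Dry run: we print the commands rather than execute them, since there is no
# WordPress install here. Swap `run` for real execution on the server.
run() { echo "would run: $*"; }

# One network, many sites: core, plugin, and theme updates happen centrally
# and apply to every site at once.
run wp core update --path=/var/www/platform       # path hypothetical
run wp plugin update --all --path=/var/www/platform
run wp theme update --all --path=/var/www/platform
```

The same central-update property is also the single point of failure IT worried about: one bad update touches every site, which is why the application layer stayed our responsibility.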
Throughout the earlier stages of the Discovery process we had spent a lot of time with Users, but in this phase the majority of that data collected helped inform a lot of discussions with IT on how best to go about getting the platform up and keeping it properly supported. I cannot stress enough how the relationships of trust built to this point played a role in getting this squared away.
Defining the Platform Process
Process can make or break a team, and it can be the success or failure of a project, especially one of this size.
The key question here was:
What will the Process for building individual sites look like on this Platform?
This wasn’t my first rodeo and I knew the ways in which a website went from conception to production, but I still needed to explore it here. I looked through the data I had, worked through how the agency aspect of this endeavour needed to operate, and tried to come up with something general enough, but descriptive enough to define process. This gave us a unified way in which to plan, project, and scope work.
Project Management, effective project management, is something I could go into at length. However, for the sake of this I'll try to keep it relevant at the higher level.
In its simplest form, Project Management is the tracking of three things: Events, Tasks, and Projects.
The important bit here was defining what would be considered an Event, a Task, and a Project within the realm of a site build.
Our earlier definition of process really helps set this right. If the point of Build is to produce a defined state, then a project is the conception-to-production of a defined state. This is an important delineation to make: a website was not a project; a releasable defined state was a project. This treated major state changes later as new projects. It was done to safeguard against project bloat, and to make sure there's a point at which enough changes trigger a new Discovery and a whole new process, rather than continuing to operate on old research and assumptions.
Events are any major dateable item attached to the project. These mostly included meetings and milestones/checkpoints.
Tasks were the big deal here. Tasks needed to be trackable, but they also needed to be projected with estimates. A project needed to know how long a task would take, because that's how you eventually know how long the whole Build takes, and eventually the entire timeline of the project.
It’s popular to divide tasks up into features, but for this the decision was made to divide things into segments. Segments in their simplest form are something that can be worked on by one assignee and take two hours or less to accomplish. That’s it. They are broken down into two types from there: those that are consecutive, and those that are detached. Consecutive segments must be worked on in a specific order. Detached segments can be done at any part of the build phases.
We could then make a list of the segments, built as tasks, for the website build. There would be a level of individuality per project, but each Site Type would have 80% of its segments in common with other sites of that type. This meant we had a template of segments to get a project set up in the Project Management System; then we would modify that list as we went through Discovery and defined scope. Keeping tasks clocking in at under two hours also meant we'd have some that came in way under and some that ran right up to it. If you forget to build in adequate buffer room, that rule has already put some in for you.
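Rolling a segment list up into a Build estimate is then simple arithmetic: sum the per-segment estimates, each capped at two hours. Because many segments finish well under their cap, the sum carries built-in buffer. A sketch with hypothetical segment names and numbers:

```shell
# Sketch: rolling a segment template up into a Build estimate
# (segment names and hour figures are hypothetical).
cat > segments.csv <<EOF
setup-child-theme,1.5
events-list,2
news-list,1
image-gallery,2
EOF

# Each estimate is capped at 2 hours by the segment rule; summing the
# estimates gives the Build total, buffer included.
awk -F, '{ total += $2 } END { printf "estimated build hours: %.1f\n", total }' segments.csv
# prints: estimated build hours: 6.5
```

In practice this template list would be copied per project and adjusted during Discovery, with consecutive vs. detached segments ordered separately.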
Spreadsheets can be useful in project management, but a good tenet is that tracking should be as passive and accessible as possible. That meant finding a proper Project Management System.
As a personal aside here I’ve long had a bit of an “interest” in Project Management Systems and so even going into this project I had used a lot of them. That combined with the research at this stage let me quickly get to a proper solution for what we were going for.
Podio was chosen because it was very flexible in how it could be set up and allowed for tracking the way we needed. Others got very close but fell short on a few important things; one was a way to track support internally without major bloat.
Then came the time to take all of that process and distill it down into a structure that could be created within Podio. The only thing Podio would not track was Git. The status of a project was what we needed to know, not the number of commits; that was not information that needed to be readily available to the wider reach of the project.
But this only solved half of it. Once a project went up it needed to be supported until it reached an End of Life point. That meant we had to build in the ability to track and manage support within the system, something that Podio let us easily do. Support also needed to be defined in a way that was trackable. Random support messages were not useful for future platform changes and iterations. A system that tracked useful data was one that helped itself grow.
Loosely, support was anything that modified or corrected a project's deployed state. At a point, enough state work meant that a new project must be spawned. That threshold was left loose and defined as an "I'll know it when I see it" situation.
The final piece to put together was outlining the process and procedure for EOLing a website. That didn't take a lot of research; it's mostly straightforward. But I did look into how it was done prior, and it was not very future-friendly. Generally, things were just deleted when they were end-of-lifed, which was very bad for content we might need down the road and for archiving or record keeping. Other times the project sat on a server somewhere like a ghost ship and no one ever touched it again, which produced an array of security issues on multiple occasions. Research here was not deep, but it was very valuable in wrapping up the entirety of process.
Constructing a Timeline
With a good sense of what all needed to happen, the time came to work out the order it would all play out in. This was also a fine time to see what overlapped and whether we had possibly missed something along the way.
There was nothing glaring that stood out, but there were several items that had to be circled back on. A few meetings were set and discussions had around different groups’ yearly timelines so that we could work out a way to get everything done without major parts stalling waiting for feedback or input from a key stakeholder.
When that was finally settled a timeline was in place and a scope was set. Build work could begin.
The first part of the build was spent setting up the full breadth of all that had to be done. Discovery had already done a lot of this for us, but since this was the platform that would hold everything, there was no project management system yet, no tracking, no list of standards. The processes were not yet in place. The issue we had overlooked was one of meta: how do you plan the project that sets up the way you will plan projects? It’s a bit of a chicken-and-egg problem. So we put the build on pause and made a scaffold of sorts. This was a one-off, and I would be the person who needed to know where things stood. For stakeholders I was the point of report, so everything funneled through me anyway.
That’s where the mega whiteboard murderwall came in. We literally just mapped this part out on a massive whiteboard.
And then, finally, Build began.
The Product of All That Time
Given enough time that massive document of scope began to come to life. It wasn’t trapped in diagrams, lists, and tables anymore. You could really start to see it. Working through the project was an exercise in refinement and systems design.
Things went off the rails a few times and we had to pause and really evaluate what was going on, but the well-defined scope all that research went into really helped ground those decisions and keep things on a coherent path.
In the end the product launched to a closed beta group first, then expanded. It ran through a number of tests before and after hitting production, and within a matter of months a few dozen sites were running smoothly on it, each having gone through the now-built process from start to finish.
Problems and Best-Laid Plans: Time to Iterate
You can plan, research, and prepare to the nth degree (and you should), but there is no substitute for what you learn from a launched product. I never expected to hit every major point adequately out of the gate on something this large, with this many stakeholders. But now I had something to tweak AND something to pull data points from. Now we were cooking with gas.
Major issues we identified were:
- Podio’s ability to customize was very powerful, but also a lot to manage. You’d spend almost as much time managing Podio as you would doing the work inside of it.
- Podio handled tracking projects great, but failed miserably at tracking the life cycle of a Task.
- There was a lot of information that we had tracked (and that Podio had been chosen because it could track it) that turned out to be interesting, but not useful.
- Podio was chosen because it could help pull data out into reports for leadership, but it turned out leadership didn’t want the reports; they just wanted high-level status.
- Phases and stages were too detailed for projects and not generalized enough.
- MAMP, while useful, was unpredictably buggy at times, and keeping all the dev setups in sync was a lot of trouble.
- IT had indicated they would provide a higher level of security than they actually did, so we ended up allocating resources to cover that hole.
- We had not formalized user training for site managers. A rough document existed and a one-time session at hand-off occurred, but more was obviously needed.
- Child themes as a means of individualization were a good idea, but several of the things we used them for could instead have been built into the admin backend and saved as settings in the DB, rather than requiring bespoke code.
- The fidelity of forms that users would ultimately request was far greater than we had expected.
- We had not planned for a larger scale analytics approach. We had set them up, but not with the expectation of granting layers of access to various areas, etc.
We broke these down, ran through a mini Discovery, gathered findings, formed solutions, and then set to building them.
- Podio was scrapped entirely and we migrated everything to Trello, which required less oversight and management of the system itself and allowed finer-grained tracking of a Task’s lifecycle. Project status became a tad less intuitive, but it was a worthwhile tradeoff.
- Since Trello itself was very simple, we tracked less and kept just the data we needed. This sped up our build process because there was less to report or input.
- We simplified the phases and stages to be more general, especially since Trello forced us to.
- Tickets also got simpler and integrated more seamlessly into the weekly workflow.
- MAMP was ditched for Docker so we could have a standardized dev environment with more control.
- A great deal of features previously coded into child themes were migrated into settings panels in the parent core’s admin. This meant fewer major deviations without sacrificing individualization.
- A new project was created to explore finding and/or building a forms system.
- A strategic plan for handling and granting access to analytics was drafted.
- We had a discussion with IT where we outlined some security holes and worked out a compromise on that responsibility.
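The MAMP-to-Docker move above can be sketched roughly. The platform is never named here, but the mentions of child themes and a parent core suggest a WordPress-style stack, so the following is an illustrative example of the idea rather than the team’s actual configuration; the image tags, credentials, and paths are all placeholders.

```yaml
# Illustrative docker-compose.yml for a standardized local dev environment.
# Every developer runs the same pinned images against the same mounted code,
# which avoids the "keep N MAMP installs in sync" problem.
version: "3.8"

services:
  db:
    image: mysql:8.0
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress     # dev-only credential
      MYSQL_ROOT_PASSWORD: root     # dev-only credential
    volumes:
      - db_data:/var/lib/mysql

  web:
    image: wordpress:latest          # pin an exact version tag in practice
    depends_on:
      - db
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
    volumes:
      # Mount the parent and child themes from the repo so local code
      # changes are reflected immediately in the running container.
      - ./themes:/var/www/html/wp-content/themes

volumes:
  db_data:
```

The key design point is that the environment definition lives in version control alongside the code, so “more control” and “standardized” come for free: a new machine only needs Docker and a `docker compose up`.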
The following spring, the official version 1.0 launched, ready to become the centralized unit of web services for the org at large.
Nearly three years from the date of launch a series of leadership changes, vendor decisions, and resource re-allocation shut the platform down. There were no discussions with web areas. There was no research. No data was gathered to support these decisions. There was merely one decision made by someone in leadership with a relationship with an outside corporate vendor that ultimately led to the end. It happens.
I’m proud of the work we did, and for a time it helped a lot of people. It had begun to bring a sense of unity and community to a very divided org. There was still a lot of work to do and a lot of iteration to be done, but you do your best. Nothing lasts.