Author: Jerry Jones

  • Lessons from Hiring Software Engineers

    I’ve been a jack of all trades web developer for nearly 15 years and moved to working full-time in hiring in September 2020. At Automattic, we’re hiring as fast as we can. This has given me a chance to work with and evaluate hundreds of software engineers over the past eight months.

    The Day-to-Day: A Bit of Context

    As a full-time hirer, my first responsibility is to kindly assess, mentor, and guide candidates through our hiring process to determine if they would be successful at Automattic. On the surface, it sounds like all I do is run interviews, grade code tests, and guide trials all day. 

While those may be my primary responsibilities, the work comes across less as “grading” and more as:

    • Digging into the why behind someone’s code.
    • Becoming a PR Review expert.
    • Delivering kind, constructive feedback.
• Implementing new ideas for evaluating candidates more accurately and quickly.
    • Formalizing/automating processes to make hiring more efficient.
    • Working on interesting technical challenges to improve our hiring toolkit.
• Improving my own skills in problem solving and designing abstractions by challenging my assumptions about how code can and should be written.
    • Changing someone’s life by letting them know they got the job. 🎉

    The Human Side of Engineering

    Most engineering roles at Automattic are focused on coding in order to help people. In hiring, it’s a bit more meta: we are focused on identifying the best people to write code in order to help people. 

Engineering hiring is an interdisciplinary role that allows you to think deeply about how to best evaluate and mentor your candidates. Even when a candidate might not be ready to be an Automattician today, most are eager, kind, and smart. Our goal is to help them learn, through useful feedback and mentorship, how they can grow to be successful the next time around.

If you’re like me and have a hard time giving direct feedback, then working in hiring is also a huge opportunity to grow as a person. I’ve learned more about how to confidently give effective, constructive, useful feedback in the last eight months than I have in my whole life.

    Giving Effective Feedback

    This should be its own post to dive into the details. Here are a few quick lessons learned:

    • Be positive, and focus on their ability to grow to meet the challenge.
    • Be direct. If the feedback isn’t clear and actionable, you can’t be sure it was understood. And if you can’t be sure it was understood, you didn’t really give any feedback.
    • Give feedback continuously. Don’t save it all for one big review at the end. The “official” feedback shouldn’t be a surprise – they should already have heard it before. This also allows people a chance to correct problematic areas before they become a bigger problem. It also shows you how they are at receiving and addressing feedback.
    • Set clear expectations. What does successfully addressing the feedback look like? How will they know they’re doing it better now?

    What Makes a Good Developer?

This is something I’ve spent a lot of time thinking about recently. I think it’s different for each organization. For example, you can be a really technically skilled engineer and not get a job at Automattic. And you can be a great engineer for Automattic, and not be able to get a job at another big organization. I don’t think I have the technical skills to pass more algorithmic-focused code tests, but I’m a great match for the kinds of problems that need to be solved at Automattic.

So, more accurately, the question should be, “What makes a good developer at [insert organization name here]?”

    For us, a brief list looks like:

    • High self-motivation and drive
    • Technical ability
    • Fast learner
    • Great written communication
    • Can adapt and solve problems from many different areas, even if they’re not familiar with the language or codebase
    • Good self-management and dependability
    • Can give and receive feedback well, allowing for a lot of positive growth

    There’s plenty more, but the main takeaway here is that technical ability is one of many pieces. It’s OK to not have the strongest technical ability as long as you have many of the others and, especially, if you can learn whatever you need to learn.

How Do You Evaluate That?

Having a list of things we’re looking for is one thing, but how do you fairly and accurately determine that someone has those qualities?

    This is much harder to do than it seems, and relying on your own intuition of, “I know it when I see it,” is a great way to have a super-biased, toxic, homogenous company.

    I wrote about Qualities of Expertise earlier this year as part of my learning around evaluating expertise. It’s hard, as the problems that require expertise to solve are open-ended, contextual, and don’t have a clear right/wrong answer.

The best way we’ve found to evaluate this is a 25–40 hour paid trial where we give candidates a close-to-real-world codebase with some basic direction and let them solve it however they feel is best. It’s like a paid mentorship and code camp where you might get a job offer at the end.

    The great thing about the trial is that there are no right answers. There are multiple, valid approaches to everything. It’s less about the direction you choose, and more about:

    • How you arrived at your decision.
    • How you communicate your decision.
    • How you solve/implement it.
    • Does it work? Does it work well? How do you know?

    There’s no accidentally passing the trial. If you receive an offer, it’s because you’ve been thoroughly vetted and deserve the offer.


The fast-paced learning and self-improvement is keeping me highly engaged in Engineering hiring. My next big step is learning more about data and rubric development. I doubt I’ll run out of things to learn anytime soon.

    If you’re a software developer looking to grow in these areas and have the opportunity to work in hiring, I highly recommend it.

Thanks to Thuy Copeland, Derek Springer, and Boro Sitnikovski for edits and feedback.

  • Tips for Text-based Interviews

    Since joining the hiring team at Automattic in the fall of 2019, I’ve noticed different patterns and preferences on text-based interviews. Some of these are also general interviewing tips.

    Send shorter messages

    The pacing of the conversation is improved when you send multiple shorter messages instead of one multi-paragraph message. Answer as you go along. This gives more opportunity for your interviewer to drop pertinent emoji reactions and give feedback/redirect along the way.

    Shorter messages = more engagement and faster feedback.

    If someone starts down the wrong path on a question and answers once every five minutes in a wall of text, then the interviewer isn’t given an opportunity to redirect early on. The interviewee may unintentionally be cutting their interview short since they spent so long answering the wrong question. Also, it’s not a very engaging conversation if you only see a message once every five minutes.

    Tip: It’s OK to type fast and correct typos/edit for clarity as you go.

    Avoid Threads if possible

This is something I prefer. I watch that “_ is typing” alert like a hawk. This is your cue to know whether you’ll be interrupting someone or not.

    It’s really helpful to know if you should wait for them to finish sending a message or if you should jump in with another message. In Slack, this alert only shows up if you’re typing in the main channel. The threads don’t show it, so you’re not sure if someone is typing.

    Of course, there are some small comments that are better in threads. That’s fine. Keep threaded messages short if you need to use them. If you need to go back and discuss something, instead of using a thread, you can use a block quote for context:

    Quoted text you want to talk about.

    Can you clarify this comment? I’m not sure I understand what you’re asking.

    In async conversations where you don’t need to know if someone is typing (most everything except interviews), then I 100% prefer threads. It’s all about context.

    Show your thought process

    So much of our interview and hiring process is about how you approach a problem rather than the final answer. Do you need more context to make a good decision or provide a good answer? Ask for it! Don’t make an assumption about it.

    In the end, a lucky right answer without showing your thought process isn’t worth as much as a solid thought process that arrives at a different answer.

    Don’t bother name dropping

We don’t have a checklist of frameworks or languages we want you to mention. It’s about how you solve and approach problems. If you demonstrate a strong learning ability, then we’re not worried about whether you’ve worked with x, y, or z, since we know you can learn whatever you need to in order to get the job done.

    Tell the story

    If we ask a question like, “Tell me about a time when…,” act like you’re telling someone a story about it, because you are! Give context:

    • What was the project?
    • Who worked on it? What was your role?
    • Why did you solve the problem the way you solved it? Did you consider alternatives?
    • How did you know you were successful? Specific metrics are 🔥

    In the end, it should sound like a concise story. This allows the interviewer to understand the issue and context so much more than if you only focus on the “What.”

    Also, be prepared to answer follow-up questions related to your story. For example, that specific metric you mentioned earlier — how did you measure that?

    It’s not that different

A few points of etiquette are different, but, in the end, the same rules apply to in-person interviews as text-only interviews. Most people have never been interviewed over text, and most people do great with it. If you practice telling your stories and can demonstrate a pattern of success, you’ll be fine, text-based or in-person.

    Thanks to Thuy Copeland, Josh Betz, and Enej Bajgoric, my fellow interviewers at Automattic, for reviewing this post 🙌🏻

  • Incremental Simplicity

    I just finished reading Code Simplicity, and, as someone who has a tendency for perfectionism, one thing that stood out to me was the idea of not worrying about building something perfect.

It’s OK to not aim for perfection on version one. Or any version. You don’t even know what perfection looks like when first starting a project. Instead, aim for a simple solution to your current problem, not a perfect solution to a future problem that may never exist.

    If you keep striving for simplicity with each new addition, your system will gain as much complexity as it needs, while still maintaining enough flexibility and simplicity for the next addition.

  • Qualities of Expertise

Gaining expertise is difficult enough, but how can you tell whether you (or someone else) have reached expert status? Unfortunately, multiple-choice tests are ineffective predictors since the benefit of expertise is being able to conjure up complex, situational-dependent solutions where there is no “correct” answer.

    One of the simpler ideas is to see how long someone has been in an industry (say, 10+ years) and give them the benefit of the doubt. Surprisingly, for programmers at least, “programming experience gained in industry does not appear to have any effect whatsoever on quality and productivity.”1 Ironically, what does increase with experience is the confidence in your incorrect decisions.2

    If you have the same one year of experience, ten years in a row, is that nine years of experience?3 Putting the time in is not necessarily effective. You need a strong feedback loop that allows you to hone your decision making and see what is effective,4 and an opportunity to vary and increase the difficulty and scope of your work.

    To find expertise, we’ll need to look beyond the CV, using some generalized qualities across different domains.

    Discrimination of subtle differences

    The ability to make fine discriminations between similar, but not equivalent, cases is a defining skill of experts.

    Empirical evaluation of the effects of experience on code quality and programmer productivity: an exploratory study

One of the most chill shows I watch is The Repair Shop. It’s a team of repair experts taking worn-down heirloom objects and returning them to their former glory. In one episode, art conservation expert Lucia Scalisi is shown restoring a damaged painting. Where I see an old painting with a hole in it, Scalisi sees:

    • the time period
    • the likely artist
    • the style
    • what has sullied the surface based on the color of the grime (she identified it as nicotine)
    • how highly finished it is and what finish is used (and thus how to best clean it)
    • and surely much more they didn’t show/I missed.

    An expert can get information from clues that the novice didn’t even realize existed.

    However, it’s not simply about knowing the information. If a quick Google search can tell us the answer, then it doesn’t require expertise. The big piece is knowing how to apply the information to make successful, situational-dependent decisions.

    Consistency

    We all get lucky from time to time. Sometimes when the problem is a nail, we happen to have a hammer. Being able to consistently formulate a good solution to a variety of problems is the key here.

    Situational-Dependent Solutions

    I’ve mentioned “situational-dependent” several times now. What does that even mean?

    As Andy Hunt emphasizes in Pragmatic Thinking & Learning: context is key.5 The same problem in a different context may require a different solution. Problems rarely have a one-size-fits-all solution. It requires expertise to correctly apply information within the right context.6

    This means when encountering a new problem, an expert often asks important questions about the context. For example, the art restoration expert may ask where the painting has been stored, under what conditions, etc. I have a guess as to how some of these details could impact a painting, but I’m not sure how I could practically use that information. An expert does.

    These details, which may seem unimportant to the novice, reveal nuances that can significantly inform the expert’s solution.

    Assessing Expertise

    Accurately assessing these qualities when hiring is easier said than done, and likely looks different across industries. However, the key qualities of expertise remain:

    • Ability to discriminate between subtle differences
    • Consistency
    • Situational-dependent solutions in the right context

If you are trying to assess expertise in an area where you don’t have much domain knowledge, Tyler Alterman has some great recommendations in their article, Why and how to assess expertise.

    In many of my examples it’s easy to see the difference between a novice and an expert. Assessing someone proficient in a subject vs an expert can get tricky. In the end, if someone can consistently discriminate subtle differences in context-dependent situations to find the best solution, they’re well on their way to expertise.

    Sources

    1. Oscar Dieste, Alejandrina M. Aranda, Fernando Uyaguari, Burak Turhan, Ayse Tosun, Davide Fucci, Markku Oivo & Natalia Juristo. Empirical evaluation of the effects of experience on code quality and programmer productivity: an exploratory study. Empirical Software Engineering. Feb 2017.
2. James Shanteau, David J. Weiss, Rickey P. Thomas, Julia C. Pounds. Performance-based assessment of expertise: How to decide if someone is an expert or not. European Journal of Operational Research. Apr 2000. p.254
    3. Andy Hunt. Pragmatic Thinking and Learning: Refactor Your Wetware. Sept 2008. p.15
    4. Tyler Alterman. Why and how to assess expertise. Effective Altruism Forum. Feb 2016.
    5. Andy Hunt. Pragmatic Thinking and Learning: Refactor Your Wetware. Sept 2008. p.35
    6. Iris Vessey. Expertise in Debugging Computer Programs: An Analysis of the Content of Verbal Protocols. IEEE Transactions on Systems, Man, and Cybernetics. Sept 1986.
  • It’s OK to Move On

As a joke (with some seriousness), I bought a glass-like impossible puzzle. It’s made of clear acrylic, has eight corners for extra trickiness, and it’s impossible to see which side is “up.” My wife is much better at puzzles than I am, and I thought this might finally be her match.

    After an hour or two of combined work, we had 15 pieces together. It’s hard to even look at it for too long – your eyes don’t know how to focus on the edges and the pieces become a double-vision ghost of themselves.

    My best success came from sorting pieces by the way the angles of the edges slanted. It wasn’t very fun to methodically go through each piece, and only mildly satisfying when two pieces snapped into place.

    My wife gave in. Even with zero consequences for abandoning, I had a strong urge to push through to the end. It’s a problem to be solved. This drive to solve whatever problem is in front of me makes me a good programmer, but at times like this, the benefit is questionable.

    What problems need solving, and which are we working on simply because they’re here in front of us? Which should we allow ourselves, guilt-free, to abandon?

Doing something like a puzzle that @ghosthoney on TikTok described as “too much work for a wrinkly version of an image I don’t really care about,”1 isn’t inherently worthwhile. If you enjoy it or get satisfaction from it, go for it! But if it feels like a chore, then it’s OK to skip it. Leave the feeling of being a chore for things that are actually chores.

    Starting doesn’t mean I need to finish. My friend Jonathan Vieker talks about how it’s better to stop reading a book once you understand the point, rather than slogging through to the end: “we’re best off using that time to read something that will benefit us.” With this in mind, I’m giving in and giving myself permission to simply be,2 and see what I become when I don’t attach my self-worth to my accomplishments.

That said, I won’t be surprised if someday I find myself pausing with appreciation as I place the final piece into a wrinkly, transparent rectangle.

    1. I’m very proud of myself for working that line into this post.
    2. Trying to, at least.
  • Flickering Lights and Simple Fixes

For months, our bedroom light would not turn on. Well, at times it would, randomly illuminating the room whenever it saw fit. At other times, it would turn off without warning. As far as we could tell, there were no signs of ghosts.

    We were having our attic turned into livable space and the contractors had recently put in subflooring. We figured they nicked a wire and that was causing the flickering.

    I read up on how to find where the wire might be compromised, thinking through how to identify potential causes and learning electrical diagrams. All I can recall now is something about tracing the neutral using some metering device I don’t have.

    A couple months later, we had an electrician come out for something else and asked him about it. He suggested we try changing the lightbulb. Really? That’s it?

    He explained that CFL lightbulbs, the ones that look like the spiral staircase of some futuristic space habitation, have a wire inside them. The jolting of the nail gun on the subfloor installation right above it probably caused that wire connection to come loose. The expansion and contraction of heat from the bulb would cause the wire to connect then detach in a slow cycle. On and off.

    He was right. We switched out the light bulb and it has been working as a light should ever since.

    Next time, whether it’s a light or a web development project, I’ll try the simple fix first.

  • Greenhouse Scorecard User Scripts

    Since joining the hiring team at Automattic, I’ve been using the recruiting/hiring software Greenhouse to score Code Tests and evaluate trial candidates.

    There are some things that have annoyed me a little about the site, so I wrote a few user scripts to improve my own time on Greenhouse.

A user script is JavaScript that runs on a site after it loads. A regex pattern determines which sites the script runs on, and then you can add your own JavaScript code to do whatever you want. 🙂

    I use Tampermonkey to manage my user scripts.

    Auto-expanding textareas

    When you have a lot of text in a scorecard attribute, it won’t show you all of the content. The textarea is too small.

    Greenhouse scorecard with each textarea very small and hiding content.
    Before the user script.

So, to see the full overview of everything, you have to manually drag the little corner handle to expand each textarea. That can be a slow, repetitive task, so I wrote a script to automatically expand all textareas to show their full content on page load.

    Greenhouse scorecard with each textarea automatically expanded to fit the full contents of the field,
    After installing the user script, each textarea is expanded on load to show its full contents.

    Much better! Now I can see the full overview as soon as I go to the page without needing to do any extra work.
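The core of an auto-expand script like this can be quite small. Here's a hypothetical sketch of how it might work, not the actual script from the repo — Greenhouse's real markup and the real script's details will differ:

```javascript
// Hypothetical sketch of an auto-expand user script. A Tampermonkey script
// would also need a ==UserScript== header with a @match rule for the
// Greenhouse domain.
function expandTextarea(textarea) {
  // scrollHeight reflects the full content height, so growing the element
  // to match it reveals everything without scrolling.
  textarea.style.height = 'auto';
  textarea.style.height = textarea.scrollHeight + 'px';
}

function expandAllTextareas(root) {
  root.querySelectorAll('textarea').forEach(expandTextarea);
}

// Run once the page has finished loading.
if (typeof window !== 'undefined') {
  window.addEventListener('load', () => expandAllTextareas(document));
}
```

Resetting the height to `auto` before measuring keeps the textarea from only ever growing — if content is removed, the next run shrinks it back to fit.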

    Show/Hide Code Test Scorecard Sections

    There isn’t a way in Greenhouse to have separate scorecards for individual stages of the hiring process. So, any attributes you have on your interview scorecard will also show up on your code test scorecard, and so on.

    Our code test has a lot of individual attributes that are only applicable for the code test. This clutters your interview scorecard with a lot of things that are unnecessary for that stage.

    I wrote a user script that will look for the string “Code Test” in the Interview Plan/Scorecard title. If the scorecard title matches “Code Test,” then it will only show scorecard sections with “Code Test” in their titles.

    Greenhouse scorecard with Code Test - Top Priority and Code Test - Medium Priority sections.
    Code test scorecard after the user script

    This majorly cleans up the clutter across all scorecards.
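The filtering logic described above can be separated from the DOM and expressed as a small function. Again, this is a hypothetical sketch — the real script's selectors and structure will differ:

```javascript
// Hypothetical sketch of the show/hide logic. The DOM-independent core
// takes the scorecard title and a list of section titles and returns the
// sections that should be hidden.
const FILTER = 'Code Test';

function sectionsToHide(scorecardTitle, sectionTitles) {
  // Only filter when the scorecard itself is a code test scorecard.
  if (!scorecardTitle.includes(FILTER)) return [];
  // Hide every section whose title does not mention the filter string.
  return sectionTitles.filter((title) => !title.includes(FILTER));
}
```

On the live page, you would read the scorecard title and section headings from the DOM (the selectors here are guesses) and set `el.hidden = true` on each section whose title comes back from `sectionsToHide`.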

    Installing the Scripts

If you already have a user script manager like Tampermonkey, you can install the Auto-expand Scorecard Textareas and Show/Hide Code Test Scorecard Sections scripts. There are more instructions at the GitHub repo if you want to modify them for your use case.

    Let me know if it’s useful for you or if you run into any issues with it!

  • Honesty in Anonymous vs Confidential Surveys

    I knew I needed to build some kind of survey to see if dropping the time limit from the code test would have any measurable impact on time spent or pressure. But I wasn’t sure if it should be anonymous or not.

    On one hand, I assumed the data for an anonymous survey would be more reliable as people would be more honest. On the other, we could get more info about the outcomes of the candidate if we knew who sent it.

    To figure out the best path, I asked myself two questions:

    • What am I measuring?
    • Are people more honest in an anonymous survey?

    What am I measuring?

    I wanted to see if removing the time limit had an impact on:

    • Time spent taking the test
    • Pressure felt from the test

In my case, knowing the outcome of the test (did they pass, did they end up being hired, etc.) would not influence either of those pieces of data. While that extra info would be interesting, it would not help me answer my core questions. As a result, I felt anonymous was the best choice.

    Are people actually more honest in an anonymous survey?

    This decision relied on my assumption that people were more honest in an anonymous survey. I figured someone had thought about and researched this before.

A quick search turned up The Impact of Anonymity on Responses to Sensitive Questions by Anthony D. Ong and David J. Weiss, published in the Journal of Applied Social Psychology in 2000.

They designed a study where they knew whether people had cheated on a test, then asked them if they cheated under confidentiality vs. anonymity. Under confidentiality, only 25% told the truth, while under anonymity, 75% did.

    The really interesting (and funny) part is how they designed the study. Basically, they wanted to see if people would self-report cheating in a scenario where they could tell if a person had actually cheated or not. 😈

They told people they’d get $25 if they scored better than 17/20 on a test with really difficult words. There was a dictionary among some books set out that the participant could access, but they didn’t mention this. They would know if the person cheated based on whether the dictionary was moved or whether a bookmark in it ended up in a different spot.

    Then, the pièce de résistance:

    In order to ensure that the words would be difficult enough to inspire cheating, we made up the last three words.

    The Impact of Anonymity on Responses to Sensitive Questions. p. 1698

    The whole study is quite clever and funny. It’s well worth a read.

    Anonymous is Best for Honesty

    In the end, I went with an anonymous survey because I needed to be able to trust the self-reported time and pressure results as much as possible. Anonymous surveys are more reliable in this sense, and the extra info gleaned from a confidential survey would not have helped me determine the core goal of the study.

  • The Bias of Timed Code Tests

    I clearly remember the code test when going through the hiring process at Automattic. As someone with imposter syndrome and anxiety, the thought of having my code under a microscope, and confirming my fear of not being a “real” developer, isn’t exactly my idea of a fun time.

    But, I made it through, and was hired as a JavaScript Engineer last year.

    I recently switched over to the Hiring team, and my first task was to go through the code test again. The first time may have been stressful, but this time would be different, wouldn’t it?

    After all, I’d done the test before and there was no way for me to fail now. No pressure, no stress, right?

    Nope! I still felt extremely anxious doing the test.

    This made me wonder: Why did I still feel so much anxiety and pressure when I could have failed miserably and still been fine?

    The Psychology of Time Limited Tests

    In the instructions of our code test, we recommend a 6 hour time limit:

    We ask that you spend around 6 hours on this test (not counting any needed setup and/or research time) and that you complete it within one week of the test being sent to you. To be clear, please do not spend a full week of work on this. We don’t want to take up too much of your time.

Even though it’s a recommendation, as soon as I read “6 hours,” a timer started ticking in the background of my mind.

    I played armchair psychologist and looked up a paper on what time-limited tests do to performance and how valid they are for evaluation. The paper talked a lot about a timed test vs an untimed power test. Our code test would be more like a power test intended to evaluate deeper skills, but we impose a non-restrictive time limit.

    tl;dr: Having a time-limit, even an artificial one, is biased and not so great for people’s performance.

    Time-Limited Tests Are Less Reliable

    “For nearly a century, we have known that students’ pace on an untimed power test does not validly reflect their performance.”

They make it clear early on that speed does not equal skill or knowledge in an area. This has been studied with students in psychology, engineering, chemistry, finance, and more. Performance under time pressure does not aid evaluation because “putting time limits on power tests introduces irrelevant variance.”

The “for nearly a century” part is backed up too. From a study done in 1914, they say:

“If we seek to evaluate the complex ‘higher’ mental functions, speed is not the primary index of efficiency, as is borne out by the evidence that speed and intelligence are not very highly correlated.”

    Finally, they make their recommendation for improving reliability very clear:

    “[…], we have known for decades that the best way to improve a time-limited test’s reliability is simply to remove its time limits.”

    Time-Limited Tests Are Less Inclusive and Less Equitable

    In the US, students with disabilities often get extended time on timed assessments. However, rarely do they actually use more than the standard time, and when they do, it’s generally only a small portion of the available extra time. In the paper, they say:

“When students request extended time or time and a half, what they are really requesting is not to feel the pressure of time ticking off; not to experience anxiety about running out of time; not to have [an untimed] power test administered as a [time-limited] test.”

    Furthermore, when most people are untimed, they are fairly efficient and accurate:

    “As we have known for a century: Many students, including those without disabilities, are ‘relatively inefficient in such timed … tests … [but] are able to do relatively efficient and accurate work when allowed to work more slowly.‘”

After all of this, their final recommendation shouldn’t come as much of a surprise:

    Remove all time limits from all higher educational tests intended to assess power. In addition to improving the tests’ validity, reliability, inclusivity, and equitability, removing time limits from power tests allows students to attenuate their anxiety (Faust, Ashcraft, & Fleck, 1996; Powers, 1986), increase their creativity (Acar & Runco, 2019; Cropley, 1972), read instructions more closely (Myers, 1960), check their work more carefully (Benjamin, Cavell, & Shallenberger, 1984), and learn more thoroughly from prior testing (Chuderski, 2016).

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7314377/

So, if we really want to suggest a 6-hour limit to be respectful of people’s time, it’s better to give a test that takes around 6 hours (or less) to complete fully and at a high quality, and not mention a time limit at all. That way, it still takes 6-ish hours, and we don’t introduce all the negative side effects of having a time limit.

    But we’re not really timing them

For Automattic, 6 hours is a recommendation. We want to be respectful of people’s time, which is great. We don’t do anything to actually time candidates, and we make it clear they can go over the limit. A lot of the studies don’t fully apply in our situation, but that doesn’t mean the time limit has no impact.

Within my first few code test reviews, one candidate mentioned they felt they could have done better but had gone over the 6 hours. As in, they imposed the 6-hour limit on themselves, even though we do not impose it.

    Their test was incomplete.

    I can relate. I think one of the big reasons it affected me is that I felt like I wasn’t qualified if I couldn’t do the test within 6 hours. So I put that extra pressure on myself to prove I could. In the end, I think a lot of people disqualify themselves because they didn’t complete the test within 6 hours.

    So, do the people who submit incomplete or not-so-great tests do so because they can’t do it, or because they feel like they aren’t qualified if they can’t?

    Who is more likely to succeed on a time-limited test?

    In the spirit of inclusion, I also wondered who is more likely to succeed on time limited tests, and if that is a hidden bias built into our code test.

    The study above mentioned the benefits of removing time limits for many different people:

    “[…], numerous studies show that removing time limits boosts the performance of numerous students, including students who are learning English, students from underrepresented backgrounds, and students who are older than average. Removing time limits also attenuates stereotypic gender differences.”

    That’s a whopper. It’s worth reading again.

    Another study had this to say about the gender bias with time limited tests:

“The effect is driven by a strong negative impact on females’ performance, while there is no statistically significant effect on males. […] Female students expect a lower grade when working under time pressure, while males do not.”

    http://ftp.iza.org/dp8708.pdf

So, if you’re working in a white, male-dominated field like tech and have a time-limited test in your hiring process, it shouldn’t be a surprise if you keep hiring mostly white males.

    What are we doing about it?

Since we’re not really timing them, it would be better not to mention a time limit, which could add further pressure.

    So, that’s what we’re going to do.

    We’re drafting up new instructions that remove the time limit. We’re also giving out an anonymous survey to evaluate how much pressure candidates feel during the hiring process. We don’t expect this to fix everything, but we’ll keep working towards making it better.

    Everyone is different, and applying for jobs is clearly a high-stress environment, but the more we can do to put people at ease, the more accurate and inclusive our process will be. 

  • More than a Seat at the Table

Pre-COVID, when we could go to restaurants, there were times I’d sit down at the table unnoticed. The servers would walk by. After a few minutes, I’d wave to get their attention as they passed by again.

    It’s happened to all of us. It’s not a big deal.

    But what if the server continues to go to other tables? They never acknowledge you.

    You wave. You speak up. You’re there. You need help too.

    Maybe they eventually look over and nod a little sign of recognition. But they still don’t do anything. They never come by.

    Maybe they eventually briefly stop to tell you they can’t serve you. They don’t have the time. They don’t have the resources.


    I recently read Disability Visibility, and so many of the personal stories in the book shout out, “I’m here. I don’t need to be fixed. The world around me is broken.”

    Our world could be radically different and inclusive if it hadn’t been built by and for able-bodied people. But, for now, we live in an ableist world that ignores and hides away disability. Maybe those with disabilities can sit at the table, sometimes, but that doesn’t mean they’re acknowledged or that their needs are served by these ableist systems and structures.


    All this time, you’re still sitting at the table. Waving, speaking-up, doing your best to draw attention. This isn’t the first time, and it won’t be the last.

Now imagine being reminded every day that you live in a world that doesn’t consider you. A world that doesn’t value you. A world that says your needs aren’t important enough. A world that tries to contort you to fit inside it.

    This is the world we’ve built.


I don’t have any answers, and I am not trying, as an able-bodied person, to speak on behalf of those with disabilities. I’m trying to share an idea that resonated with me in order to hopefully create more empathy and action amongst other able-bodied folks. We need a table where all are welcomed, included, and respected. Rather than listen to me, please check out Disability Visibility and follow disabled activists on Twitter like Alice Wong and Imani Barbarin.