
Automate post-deployment smoke tests for releases #493

Open
Matt-Yorkley opened this issue Aug 6, 2019 · 5 comments

Comments

@Matt-Yorkley
Collaborator

We should investigate/discuss the best way to automate smoke tests.

@Matt-Yorkley
Collaborator Author

@sauloperez we need to create RachelBot before the real @RachL goes on holiday :trollface:

@Matt-Yorkley
Collaborator Author

Matt-Yorkley commented Aug 6, 2019

Rough ideas for this:

  1. We can add a task to the deploy playbook that runs when the environment is production and tests the live homepage of the site after the deployment has finished. It can check that the response code is 200 and that some content we expect to be there is present in the HTML response. We could also include the result of that check in the Slack notification, which would be quite nice. Obviously that's pretty basic, but it would take 5 minutes to implement (see the sketch after this list).

  2. I think Selenium can be configured to point at a live server instead of a local test server. We could potentially set up a Travis-style build with a customised config that runs through a subset of our existing feature tests (so we don't have to create a new set of tests from scratch, and they stay up to date). I'm not sure how difficult this would be.
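A minimal sketch of what the check in point 1 could look like, written as a plain Ruby script the playbook task could call (an Ansible task could do the same); the URL and expected string are placeholders, not real config:

```ruby
#!/usr/bin/env ruby
# Standalone smoke check the playbook could invoke after a production deploy.
# SMOKE_URL and the expected string are placeholders for illustration.
require "net/http"
require "uri"

url = ENV.fetch("SMOKE_URL", "https://www.openfoodnetwork.org.uk/")
response = Net::HTTP.get_response(URI(url))

checks = {
  "response code is 200"     => response.code == "200",
  "expected content present" => response.body.include?("Open Food Network"),
}

checks.each { |name, ok| puts "#{ok ? 'PASS' : 'FAIL'}: #{name}" }

# A non-zero exit lets the playbook fail the step and report it to Slack.
exit(checks.values.all? ? 0 : 1)
```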

@RachL
Contributor

RachL commented Aug 6, 2019

Pleaaaase don't call it that 🙀 🤣

We've also spoken about other tools that are complementary to Selenium, like FitNesse, here:
https://community.openfoodnetwork.org/t/seed-data-development-provisioning-deployment/910/5?u=rachel

I don't think we need to rush things before I leave for my holidays. I can dedicate some time to it on Thursday and Friday, but I would really love to take the time to choose the right tool.

We need tools that enable us to grow this kind of test coverage at a regular rhythm. I would also love it to be a tester task rather than a dev task, so that would be one of my criteria for choosing the tool.

What do you think, @lin-d-hop?

@filipefurtad0
Contributor

Hey @Matt-Yorkley,

I'm wondering if it would be possible to run some already-existing specs directly on our staging or live servers. For example, if we set `Capybara.app_host` to point at our servers, perhaps we could cherry-pick some specs and adapt them for smoke tests.

Would this be an approach to address point 2 you mentioned?
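A rough, untested sketch of what that configuration could look like (assuming a recent Capybara; the host and file name are placeholders):

```ruby
# spec/smoke_spec_helper.rb -- hypothetical file name
require "capybara/rspec"
require "selenium-webdriver"

Capybara.run_server = false  # don't boot a local Rails app under test
Capybara.app_host = ENV.fetch("SMOKE_HOST", "https://staging.openfoodnetwork.org")
Capybara.default_driver = :selenium_chrome_headless

# Cherry-picked feature specs would then run `visit "/"` and friends
# against the remote host instead of the local test server.
```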

@sauloperez
Contributor

Theoretically yes, @filipefurtad0. It's something that needs to be explored. The adaptation work I already see is around the dummy data we create, such as enterprises, users, orders, etc. Our specs already handle cleaning that up after the test execution, but I'm sure there are issues we haven't anticipated.
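To make the cleanup concern concrete, one possible (untested) shape for it: smoke specs register whatever they create so an after-hook can delete it through the live app itself. The `:smoke` tag, endpoint paths, and token are assumptions for illustration, not existing OFN tooling:

```ruby
require "net/http"
require "uri"

# Hypothetical registry: smoke specs push the API path of each record they
# create (e.g. "/api/orders/123") so it can be removed afterwards.
module SmokeCleanup
  def self.created_paths
    @created_paths ||= []
  end
end

RSpec.configure do |config|
  config.after(:each, :smoke) do
    # DatabaseCleaner can't truncate a remote server's database, so each
    # spec has to undo its own writes via the application's HTTP API.
    SmokeCleanup.created_paths.each do |path|
      uri = URI.join(Capybara.app_host, path)
      request = Net::HTTP::Delete.new(uri)
      request["Authorization"] = "Bearer #{ENV.fetch('SMOKE_API_TOKEN')}"
      Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == "https") do |http|
        http.request(request)
      end
    end
    SmokeCleanup.created_paths.clear
  end
end
```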
