
Unconfirm wrongfully verified email records

Adam Hegyi requested to merge unconfirm-wrongfully-verified-email-records into master

Related issues:

This MR takes care of cleaning up wrongfully confirmed email records.

The MR does the following (a rough sketch of the per-batch job follows the list):

  • Iterates over all Email records in a background migration.
  • For each batch, looks for wrongfully verified email addresses.
  • Unconfirms both the secondary email address and the user's primary email address (since primary emails can easily be switched).
  • Sends the confirmation email for both addresses.
  • Sends an extra explanation email.
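
For illustration, the per-batch job could be structured roughly along these lines. This is a sketch only: the class name and step comments are hypothetical, while the filtering conditions mirror the SQL queries shown later in this description.

# Hypothetical sketch of the per-batch job; not the actual code in this MR.
class WrongfullyConfirmedEmailUnconfirmer
  def perform(start_id, stop_id)
    # "Wrongfully verified" means: a secondary email confirmed at the exact
    # same timestamp as the user's primary email, with a different address.
    emails = Email.joins(:user)
                  .where(id: start_id..stop_id)
                  .where(users: { state: 'active', user_type: nil })
                  .where.not(emails: { confirmed_at: nil })
                  .where('emails.confirmed_at = users.confirmed_at')
                  .where('emails.email <> users.email')

    # 1. Unconfirm the matched emails and set new confirmation tokens
    #    (see "Email update query" below).
    # 2. Unconfirm the corresponding users (see "User update query" below).
    # 3. Send Devise confirmation instructions for both addresses, plus the
    #    extra explanation email.
  end
end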

Notes:

  • Note 1: We only do this for active users; bots are skipped.
  • Note 2: We might send the informational emails multiple times to the same user if they have more than one affected email record.
  • Note 3: I had to modify the secondary email confirmation template so it doesn't only cover the case where a user adds an email, to avoid confusion. See the change in app/views/devise/mailer/_confirmation_instructions_secondary.html.haml.

Explanation email:

The BG migration updates the confirmed_at and confirmation_token attributes in batches in order to avoid updating a large volume of records at once.

I verified in Devise that a custom-generated confirmation_token can be confirmed (there is no extra validation):

email = Email.find(1) # a record where confirmation_token was set by us
Email.confirm_by_token(email.confirmation_token)

email.reload.confirmed? # => true

Generating confirmation tokens:

Tokens need to be unique since there is a unique index on the column. To generate the confirmation token, we'll use the md5() function in PostgreSQL. We concatenate several attributes in order to prevent users from "guessing" the token. A Ruby approximation of the expression follows the list below.

md5(emails.id::varchar || emails.created_at || users.encrypted_password || '#{Integer(Time.now.to_i)}') as md5_string

  • emails.id - unique
  • emails.created_at - record creation timestamp
  • users.encrypted_password - column is not accessible from the app at all
  • current unix epoch - in case there is an md5 collision (very unlikely), the next retry will have a different input string
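
For reference, the input string can be approximated in Ruby as below. This is illustrative only: PostgreSQL renders created_at as text differently than Ruby's Time#to_s, so the digests won't match byte-for-byte; the migration computes the token in SQL as shown above.

require 'digest'

# Ruby approximation of the PG md5() expression above (illustrative only).
def confirmation_token_for(email, user, epoch = Time.now.to_i)
  input = "#{email.id}#{email.created_at}#{user.encrypted_password}#{epoch}"
  Digest::MD5.hexdigest(input)
end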

Queries

When triggering the background migration, we iterate over all email records, which will create 400+ jobs.

  • Batch size: 1K
  • Interval: 5 minutes
  • Records to be updated: 100K * 2 (100K email records and about 100K user records)
  • Total runtime: iterating over 236,714 email records takes ~20 hours

We use a low batch size and a high(er) interval because we're sending emails.
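
Scheduling with these parameters might look roughly like this in a post-deployment migration. This is a sketch: it assumes GitLab's queue_background_migration_jobs_by_range_at_intervals helper, and the migration and job class names are illustrative.

# Post-deployment migration sketch; class and job names are illustrative.
class ScheduleWrongfullyConfirmedEmailUnconfirming < ActiveRecord::Migration[6.0]
  include Gitlab::Database::MigrationHelpers

  MIGRATION = 'WrongfullyConfirmedEmailUnconfirmer'
  BATCH_SIZE = 1_000   # low batch size: each batch sends emails
  INTERVAL = 5.minutes # high(er) interval to spread the email volume

  disable_ddl_transaction!

  class Email < ActiveRecord::Base
    include EachBatch
    self.table_name = 'emails'
  end

  def up
    queue_background_migration_jobs_by_range_at_intervals(
      Email, MIGRATION, INTERVAL, batch_size: BATCH_SIZE
    )
  end

  def down
    # no-op: unconfirming records and re-sending emails is not reversible
  end
end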

Query for finding affected users (a batch that contains a large volume of items to be migrated):

SELECT "emails".* FROM "emails" 
INNER JOIN "users" ON "users"."id" = "emails"."user_id" 
WHERE 
"users"."state" = 'active' AND 
"users"."user_type" IS NULL AND 
"emails"."id" BETWEEN 425000 AND 426000
AND (emails.confirmed_at IS NOT NULL)
AND (emails.confirmed_at = users.confirmed_at) 
AND (emails.email <> users.email)

Plan

Note: I tried to add a specialized index on users, but it seems that the planner prefers the pkey.

I used the following query to find the "heaviest" batch:

SELECT emails.id / 1000, COUNT(*)
FROM users
INNER JOIN emails ON emails.user_id = users.id
WHERE
  users.state IN ('active')
  AND (users.user_type IS NULL OR users.user_type IN (NULL, 6, 4))
  AND users.confirmed_at IS NOT NULL
  AND emails.confirmed_at IS NOT NULL
  AND emails.confirmed_at = users.confirmed_at
  AND users.email <> emails.email
GROUP BY emails.id / 1000
ORDER BY 2 DESC
LIMIT 10;

Email update query:

WITH md5_strings as (
  SELECT emails.id as email_id, md5(emails.id::varchar || emails.created_at || users.encrypted_password || '1592547886') as md5_string 
  FROM "emails" 
  INNER JOIN "users" ON "users"."id" = "emails"."user_id" 
  WHERE 
  "users"."state" = 'active' AND 
  "users"."user_type" IS NULL AND 
  "emails"."id" BETWEEN 425000 AND 426000 AND 
  (emails.confirmed_at IS NOT NULL) AND 
  (emails.confirmed_at = users.confirmed_at) AND 
   (emails.email <> users.email)  
)
UPDATE "emails"
SET confirmed_at = NULL, confirmation_token=md5_strings.md5_string
FROM md5_strings
WHERE id = md5_strings.email_id

Plan

User update query

UPDATE "users" 
SET confirmed_at = NULL, 
confirmation_token=md5(users.id::varchar || users.created_at || users.encrypted_password || '1592547886') 
WHERE "users"."id" IN (worst case: 1K user ids)

Worst case: updating 1k user records. Finds records by primary key.

Verification

Snippet to generate "bad" data in your local environment:

user1 = FactoryBot.create(:user, username: 'emailverificationtest1', email: 'emailverificationtest1@test.com', confirmed_at: Time.now)
user2 = FactoryBot.create(:user, username: 'emailverificationtest2', email: 'emailverificationtest2@test.com', confirmed_at: Time.now)

[user1, user2].each do |user|
  email = user.emails.create!(email: "#{user.username}.secondemail@test.com")
  # Mimic the bug: confirm the secondary email with the exact same
  # timestamp as the user's primary email confirmation.
  email.update!(confirmed_at: user.confirmed_at)
end

After executing the snippet, you can run rake db:migrate. The mailer job output should show up in the development.log file (assuming that Sidekiq is running).
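
Once the jobs have run, a quick console check along these lines (a sketch) should show both records unconfirmed, with fresh tokens set:

user = User.find_by(username: 'emailverificationtest1')
user.reload.confirmed_at       # => nil (primary email unconfirmed)
user.confirmation_token        # => md5 string set by the migration
user.emails.first.confirmed_at # => nil (secondary email unconfirmed)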

Conformity

Availability and Testing

I tested the email sending locally and verified that the emails we send contain confirmation links.

Security

If this MR contains changes to processing or storing of credentials or tokens, authorization and authentication methods and other items described in the security review guidelines:

  • Label as security and @ mention @gitlab-com/gl-security/appsec
  • The MR includes necessary changes to maintain consistency between UI, API, email, or other methods
  • Security reports checked/validated by a reviewer from the AppSec team