Initiate the web_hook_logs partitioning

What does this MR do?

Related issue: #323674 (closed)

The first step for partitioning the web_hook_logs table (&5558 (closed)) is to create the initial partitions (covering the existing data), add the trigger that will keep the partitioned table in sync with the original one, and schedule the background migration jobs that will copy all the existing data.

In detail, the following are covered (the resulting SQL objects are sketched right after the list):

  1. Create a new partitioned table web_hook_logs_part_0c5294f417 with a nearly identical schema to web_hook_logs. The only differences are that the id column (part of the primary key) is a bigint (to address #276021 (closed)) and that the secondary indexes are missing (to facilitate faster inserts).
  2. Create all the monthly partitions that cover the existing data
  3. Install a trigger that keeps the two tables in sync; in particular, it mirrors inserts, updates, and deletes on web_hook_logs to the newly created table
  4. Schedule background migration jobs to copy data from web_hook_logs to the partitioned table
  5. Add WebHookLogPartitioned model and register it for dynamic partition creation
  6. Add an explicit primary key definition to WebHookLog. This does not affect the current behavior of the application, but will facilitate a smooth transition when we swap the tables two milestones after the partitioning is initiated.
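
For reference, steps 1-3 boil down to the following DDL, reformatted from the migration output shown further down (the full sync function body is listed in the Local Execution section):

CREATE TABLE web_hook_logs_part_0c5294f417 (
  LIKE web_hook_logs INCLUDING ALL EXCLUDING INDEXES,
  partition_key_bfab5f7bb7 timestamp without time zone NOT NULL,
  PRIMARY KEY (id, partition_key_bfab5f7bb7)
) PARTITION BY RANGE (partition_key_bfab5f7bb7);
-- partition_key_bfab5f7bb7 is then renamed to created_at and id is changed to bigint

-- one monthly partition per month of existing data, e.g.:
CREATE TABLE gitlab_partitions_dynamic.web_hook_logs_part_0c5294f417_202103
PARTITION OF web_hook_logs_part_0c5294f417
FOR VALUES FROM ('2021-03-01') TO ('2021-04-01');

-- trigger mirroring writes from the original table:
CREATE TRIGGER table_sync_trigger_b99eb6998c
AFTER INSERT OR UPDATE OR DELETE ON web_hook_logs
FOR EACH ROW
EXECUTE FUNCTION table_sync_function_29bc99d6db();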

Analysis

As of 2021-03-01, web_hook_logs has ~527M records and a total size of ~1 TiB:

Table         | Rows  | Total size        | Table size          | Index(es) size    | TOAST size
web_hook_logs | ~527M | 1.02 TiB (10.46%) | 713.02 GiB (13.37%) | 42.26 GiB (1.10%) | 279.01 GiB (38.56%)

The earliest record is from July 2020, so we expect 16 partitions to be created initially (the default partition + 9 monthly partitions covering July 2020 to March 2021 + 6 future partitions):

gitlabhq_production=> SELECT created_at FROM web_hook_logs ORDER BY id asc LIMIT 1;
         created_at
----------------------------
 2020-07-26 23:40:36.382274

Given that the scheduling migration will schedule one BackfillPartitionedTable job every 2 minutes, each copying 50K records in sub-batches of 2,500, we expect the backfill migration to take the following time to complete:

527M / 50K = 10,540 jobs, one running every 2 minutes = 21,080 minutes = 351.33 hours ≈ 14.64 days
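
Restating the arithmetic as a throwaway query (illustrative only):

SELECT ceil(527000000 / 50000.0)                      AS jobs,    -- 10540
       ceil(527000000 / 50000.0) * 2                  AS minutes, -- 21080
       round(ceil(527000000 / 50000.0) * 2 / 60, 2)   AS hours,   -- 351.33
       round(ceil(527000000 / 50000.0) * 2 / 1440, 2) AS days;    -- 14.64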

The 2,500-record sub-batches should execute in a timely fashion (the 20 sub-batches of each job must complete well within its 2-minute interval) and should cause no performance issues.

The SELECT part of each sub-batch takes 100-500 ms on a production replica:

SELECT id FROM web_hook_logs WHERE (id >= 600000000) ORDER BY id ASC LIMIT 1;
     id
-----------
 600000000

SELECT id FROM web_hook_logs WHERE (id >= 600000000) ORDER BY id ASC LIMIT 1 OFFSET 2499;
     id
-----------
 600002525

EXPLAIN (ANALYZE, BUFFERS) SELECT *
FROM web_hook_logs
WHERE id BETWEEN 600000000 AND 600002525;
                                                                   QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------
 Index Scan using web_hook_logs_pkey on web_hook_logs  (cost=0.57..2426.77 rows=1964 width=1414) (actual time=1.054..488.277 rows=2500 loops=1)
   Index Cond: ((id >= 600000000) AND (id <= 600002525))
   Buffers: shared hit=386 read=2123 written=335
   I/O Timings: read=462.483 write=9.667
 Planning Time: 0.100 ms
 Execution Time: 488.750 ms

----

EXPLAIN (ANALYZE, BUFFERS) SELECT *
FROM web_hook_logs
WHERE id BETWEEN 650000000 AND 650002848;

 Planning Time: 0.081 ms
 Execution Time: 107.436 ms

----

EXPLAIN (ANALYZE, BUFFERS) SELECT *
FROM web_hook_logs
WHERE id BETWEEN 700000000 AND 700002551;

 Planning Time: 0.083 ms
 Execution Time: 427.067 ms

----

EXPLAIN (ANALYZE, BUFFERS) SELECT *
FROM web_hook_logs
WHERE id BETWEEN 750000000 AND 750002536;

 Planning Time: 0.104 ms
 Execution Time: 57.825 ms

----

EXPLAIN (ANALYZE, BUFFERS) SELECT *
FROM web_hook_logs
WHERE id BETWEEN 800000003 AND 800002595;

 Planning Time: 0.195 ms
 Execution Time: 114.342 ms

Migrations - Local Execution

$ bundle exec rake db:migrate

== 20210306121300 PartitionWebHookLogs: migrating =============================
-- transaction_open?()
   -> 0.0000s
-- transaction()
-- table_exists?("web_hook_logs_part_0c5294f417")
   -> 0.0009s
-- transaction()
-- execute("CREATE TABLE web_hook_logs_part_0c5294f417 (\n  LIKE web_hook_logs INCLUDING ALL EXCLUDING INDEXES,\n  partition_key_bfab5f7bb7 timestamp without time zone NOT NULL,\n  PRIMARY KEY (id, partition_key_bfab5f7bb7)\n) PARTITION BY RANGE (partition_key_bfab5f7bb7)\n")
   -> 0.0023s
-- remove_column("web_hook_logs_part_0c5294f417", "created_at")
   -> 0.0008s
-- rename_column("web_hook_logs_part_0c5294f417", "partition_key_bfab5f7bb7", "created_at")
   -> 0.0018s
-- change_column_default("web_hook_logs_part_0c5294f417", "id", nil)
   -> 0.0016s
-- change_column("web_hook_logs_part_0c5294f417", "id", :bigint)
   -> 0.0011s
   -> 0.0084s
-- table_exists?("gitlab_partitions_dynamic.web_hook_logs_part_0c5294f417_000000")
   -> 0.0004s
-- execute("CREATE TABLE gitlab_partitions_dynamic.web_hook_logs_part_0c5294f417_000000 PARTITION OF web_hook_logs_part_0c5294f417\nFOR VALUES FROM (MINVALUE) TO ('2021-03-01')\n")
   -> 0.0027s
-- table_exists?("gitlab_partitions_dynamic.web_hook_logs_part_0c5294f417_202103")
   -> 0.0004s
-- execute("CREATE TABLE gitlab_partitions_dynamic.web_hook_logs_part_0c5294f417_202103 PARTITION OF web_hook_logs_part_0c5294f417\nFOR VALUES FROM ('2021-03-01') TO ('2021-04-01')\n")
   -> 0.0025s
-- table_exists?("gitlab_partitions_dynamic.web_hook_logs_part_0c5294f417_202104")
   -> 0.0004s
-- execute("CREATE TABLE gitlab_partitions_dynamic.web_hook_logs_part_0c5294f417_202104 PARTITION OF web_hook_logs_part_0c5294f417\nFOR VALUES FROM ('2021-04-01') TO ('2021-05-01')\n")
   -> 0.0024s
   -> 0.0196s
-- execute("CREATE  FUNCTION table_sync_function_29bc99d6db()\nRETURNS TRIGGER AS\n$$\nBEGIN\nIF (TG_OP = 'DELETE') THEN\n  DELETE FROM web_hook_logs_part_0c5294f417 where id = OLD.id;\nELSIF (TG_OP = 'UPDATE') THEN\n  UPDATE web_hook_logs_part_0c5294f417\n  SET web_hook_id = NEW.web_hook_id,\n    trigger = NEW.trigger,\n    url = NEW.url,\n    request_headers = NEW.request_headers,\n    request_data = NEW.request_data,\n    response_headers = NEW.response_headers,\n    response_body = NEW.response_body,\n    response_status = NEW.response_status,\n    execution_duration = NEW.execution_duration,\n    internal_error_message = NEW.internal_error_message,\n    updated_at = NEW.updated_at,\n    created_at = NEW.created_at\n  WHERE web_hook_logs_part_0c5294f417.id = NEW.id;\nELSIF (TG_OP = 'INSERT') THEN\n  INSERT INTO web_hook_logs_part_0c5294f417 (id,\n    web_hook_id,\n    trigger,\n    url,\n    request_headers,\n    request_data,\n    response_headers,\n    response_body,\n    response_status,\n    execution_duration,\n    internal_error_message,\n    updated_at,\n    created_at)\n  VALUES (NEW.id,\n    NEW.web_hook_id,\n    NEW.trigger,\n    NEW.url,\n    NEW.request_headers,\n    NEW.request_data,\n    NEW.response_headers,\n    NEW.response_body,\n    NEW.response_status,\n    NEW.execution_duration,\n    NEW.internal_error_message,\n    NEW.updated_at,\n    NEW.created_at);\nEND IF;\nRETURN NULL;\n\nEND\n$$ LANGUAGE PLPGSQL\n")
   -> 0.0019s
-- execute("COMMENT ON FUNCTION table_sync_function_29bc99d6db IS 'Partitioning migration: table sync for web_hook_logs table'")
   -> 0.0006s
-- current_schema()
   -> 0.0002s
-- execute("CREATE TRIGGER table_sync_trigger_b99eb6998c\nAFTER INSERT OR UPDATE OR DELETE ON web_hook_logs\nFOR EACH ROW\nEXECUTE FUNCTION table_sync_function_29bc99d6db()\n")
   -> 0.0008s
== 20210306121300 PartitionWebHookLogs: migrated (0.0344s) ====================
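
For readability, the table_sync_function_29bc99d6db created by the escaped execute above, reformatted as plain SQL:

CREATE FUNCTION table_sync_function_29bc99d6db()
RETURNS TRIGGER AS
$$
BEGIN
IF (TG_OP = 'DELETE') THEN
  DELETE FROM web_hook_logs_part_0c5294f417 WHERE id = OLD.id;
ELSIF (TG_OP = 'UPDATE') THEN
  UPDATE web_hook_logs_part_0c5294f417
  SET web_hook_id = NEW.web_hook_id,
      trigger = NEW.trigger,
      url = NEW.url,
      request_headers = NEW.request_headers,
      request_data = NEW.request_data,
      response_headers = NEW.response_headers,
      response_body = NEW.response_body,
      response_status = NEW.response_status,
      execution_duration = NEW.execution_duration,
      internal_error_message = NEW.internal_error_message,
      updated_at = NEW.updated_at,
      created_at = NEW.created_at
  WHERE web_hook_logs_part_0c5294f417.id = NEW.id;
ELSIF (TG_OP = 'INSERT') THEN
  INSERT INTO web_hook_logs_part_0c5294f417 (id, web_hook_id, trigger, url,
    request_headers, request_data, response_headers, response_body,
    response_status, execution_duration, internal_error_message,
    updated_at, created_at)
  VALUES (NEW.id, NEW.web_hook_id, NEW.trigger, NEW.url,
    NEW.request_headers, NEW.request_data, NEW.response_headers, NEW.response_body,
    NEW.response_status, NEW.execution_duration, NEW.internal_error_message,
    NEW.updated_at, NEW.created_at);
END IF;
RETURN NULL;
END
$$ LANGUAGE PLPGSQL;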

== 20210306121310 BackfillPartitionedWebHookLogs: migrating ===================
-- transaction_open?()
   -> 0.0000s
-- Scheduled 0 ::Gitlab::Database::PartitioningMigrationHelpers::BackfillPartitionedTable jobs with a maximum of 50000 records per batch and an interval of 120 seconds.

The migration is expected to take at least 0 seconds. Expect all jobs to have completed after 2021-03-06 12:57:20 UTC.
== 20210306121310 BackfillPartitionedWebHookLogs: migrated (0.0303s) ==========

We can see that the new partitioned table has the same schema as web_hook_logs, with the exception of id being a bigint and the secondary indexes and FK missing (they will be added with #323676 (closed) in %13.11):

gitlabhq_development=# \d+ web_hook_logs_part_0c5294f417
                                         Table "public.web_hook_logs_part_0c5294f417"
         Column         |            Type             | Collation | Nullable | Default | Storage  | Stats target | Description
------------------------+-----------------------------+-----------+----------+---------+----------+--------------+-------------
 id                     | bigint                      |           | not null |         | plain    |              |
 web_hook_id            | integer                     |           | not null |         | plain    |              |
 trigger                | character varying           |           |          |         | extended |              |
 url                    | character varying           |           |          |         | extended |              |
 request_headers        | text                        |           |          |         | extended |              |
 request_data           | text                        |           |          |         | extended |              |
 response_headers       | text                        |           |          |         | extended |              |
 response_body          | text                        |           |          |         | extended |              |
 response_status        | character varying           |           |          |         | extended |              |
 execution_duration     | double precision            |           |          |         | plain    |              |
 internal_error_message | character varying           |           |          |         | extended |              |
 updated_at             | timestamp without time zone |           | not null |         | plain    |              |
 created_at             | timestamp without time zone |           | not null |         | plain    |              |
Partition key: RANGE (created_at)
Indexes:
    "web_hook_logs_part_0c5294f417_pkey" PRIMARY KEY, btree (id, created_at)
Partitions: gitlab_partitions_dynamic.web_hook_logs_part_0c5294f417_000000 FOR VALUES FROM (MINVALUE) TO ('2021-03-01 00:00:00'),
            gitlab_partitions_dynamic.web_hook_logs_part_0c5294f417_202103 FOR VALUES FROM ('2021-03-01 00:00:00') TO ('2021-04-01 00:00:00'),
            gitlab_partitions_dynamic.web_hook_logs_part_0c5294f417_202104 FOR VALUES FROM ('2021-04-01 00:00:00') TO ('2021-05-01 00:00:00'),
            gitlab_partitions_dynamic.web_hook_logs_part_0c5294f417_202105 FOR VALUES FROM ('2021-05-01 00:00:00') TO ('2021-06-01 00:00:00'),
            gitlab_partitions_dynamic.web_hook_logs_part_0c5294f417_202106 FOR VALUES FROM ('2021-06-01 00:00:00') TO ('2021-07-01 00:00:00'),
            gitlab_partitions_dynamic.web_hook_logs_part_0c5294f417_202107 FOR VALUES FROM ('2021-07-01 00:00:00') TO ('2021-08-01 00:00:00'),
            gitlab_partitions_dynamic.web_hook_logs_part_0c5294f417_202108 FOR VALUES FROM ('2021-08-01 00:00:00') TO ('2021-09-01 00:00:00'),
            gitlab_partitions_dynamic.web_hook_logs_part_0c5294f417_202109 FOR VALUES FROM ('2021-09-01 00:00:00') TO ('2021-10-01 00:00:00')

gitlabhq_development=# \d web_hook_logs
                                              Table "public.web_hook_logs"
         Column         |            Type             | Collation | Nullable |                  Default
------------------------+-----------------------------+-----------+----------+-------------------------------------------
 id                     | integer                     |           | not null | nextval('web_hook_logs_id_seq'::regclass)
 web_hook_id            | integer                     |           | not null |
 trigger                | character varying           |           |          |
 url                    | character varying           |           |          |
 request_headers        | text                        |           |          |
 request_data           | text                        |           |          |
 response_headers       | text                        |           |          |
 response_body          | text                        |           |          |
 response_status        | character varying           |           |          |
 execution_duration     | double precision            |           |          |
 internal_error_message | character varying           |           |          |
 created_at             | timestamp without time zone |           | not null |
 updated_at             | timestamp without time zone |           | not null |
Indexes:
    "web_hook_logs_pkey" PRIMARY KEY, btree (id)
    "index_web_hook_logs_on_created_at_and_web_hook_id" btree (created_at, web_hook_id)
    "index_web_hook_logs_on_web_hook_id" btree (web_hook_id)
Foreign-key constraints:
    "fk_rails_666826e111" FOREIGN KEY (web_hook_id) REFERENCES web_hooks(id) ON DELETE CASCADE
Triggers:
    table_sync_trigger_b99eb6998c AFTER INSERT OR DELETE OR UPDATE ON web_hook_logs FOR EACH ROW EXECUTE PROCEDURE table_sync_function_29bc99d6db()

$ bundle exec rake db:rollback
== 20210306121310 BackfillPartitionedWebHookLogs: reverting ===================
== 20210306121310 BackfillPartitionedWebHookLogs: reverted (0.0100s) ==========

$ bundle exec rake db:rollback
== 20210306121300 PartitionWebHookLogs: reverting =============================
-- transaction_open?()
   -> 0.0000s
-- execute("DROP TRIGGER IF EXISTS table_sync_trigger_b99eb6998c ON web_hook_logs")
   -> 0.0014s
-- execute("DROP FUNCTION IF EXISTS table_sync_function_29bc99d6db()")
   -> 0.0011s
-- drop_table("web_hook_logs_part_0c5294f417")
   -> 0.0091s
== 20210306121300 PartitionWebHookLogs: reverted (0.0190s) ====================

Migrations run on a production clone

By executing the migrations against a production clone, we can see that our assumptions are correct:

  1. Partitions will be created covering [2020-07-01 - 2021-05-01]
  2. 10904 jobs will be scheduled, expected to run for 15.14 days (1308480 seconds)
  3. The scheduling migration itself needs a maximum of 62 minutes (3726s) to complete

== 20210306121300 PartitionWebHookLogs: migrating =============================
-- transaction_open?()
   -> 0.0000s
-- transaction()
-- table_exists?("web_hook_logs_part_0c5294f417")
   -> 0.0683s
-- transaction()
-- execute("CREATE TABLE web_hook_logs_part_0c5294f417 (\n  LIKE web_hook_logs INCLUDING ALL EXCLUDING INDEXES,\n  partition_key_bfab5f7bb7 timestamp without time zone NOT NULL,\n  PRIMARY KEY (id, partition_key_bfab5f7bb7)\n) PARTITION BY RANGE (partition_key_bfab5f7bb7)\n")
   -> 0.1382s
-- remove_column("web_hook_logs_part_0c5294f417", "created_at")
   -> 0.0477s
-- rename_column("web_hook_logs_part_0c5294f417", "partition_key_bfab5f7bb7", "created_at")
   -> 0.0829s
-- change_column_default("web_hook_logs_part_0c5294f417", "id", nil)
   -> 0.0712s
-- change_column("web_hook_logs_part_0c5294f417", "id", :bigint)
   -> 0.0367s
   -> 0.4128s
-- table_exists?("gitlab_partitions_dynamic.web_hook_logs_part_0c5294f417_000000")
   -> 0.0349s
-- execute("CREATE TABLE gitlab_partitions_dynamic.web_hook_logs_part_0c5294f417_000000 PARTITION OF web_hook_logs_part_0c5294f417\nFOR VALUES FROM (MINVALUE) TO ('2020-07-01')\n")
   -> 0.0515s
-- table_exists?("gitlab_partitions_dynamic.web_hook_logs_part_0c5294f417_202007")
   -> 0.0367s
-- execute("CREATE TABLE gitlab_partitions_dynamic.web_hook_logs_part_0c5294f417_202007 PARTITION OF web_hook_logs_part_0c5294f417\nFOR VALUES FROM ('2020-07-01') TO ('2020-08-01')\n")
   -> 0.0465s
-- table_exists?("gitlab_partitions_dynamic.web_hook_logs_part_0c5294f417_202008")
   -> 0.0381s
-- execute("CREATE TABLE gitlab_partitions_dynamic.web_hook_logs_part_0c5294f417_202008 PARTITION OF web_hook_logs_part_0c5294f417\nFOR VALUES FROM ('2020-08-01') TO ('2020-09-01')\n")
   -> 0.0406s
-- table_exists?("gitlab_partitions_dynamic.web_hook_logs_part_0c5294f417_202009")
   -> 0.0345s
-- execute("CREATE TABLE gitlab_partitions_dynamic.web_hook_logs_part_0c5294f417_202009 PARTITION OF web_hook_logs_part_0c5294f417\nFOR VALUES FROM ('2020-09-01') TO ('2020-10-01')\n")
   -> 0.0385s
-- table_exists?("gitlab_partitions_dynamic.web_hook_logs_part_0c5294f417_202010")
   -> 0.0350s
-- execute("CREATE TABLE gitlab_partitions_dynamic.web_hook_logs_part_0c5294f417_202010 PARTITION OF web_hook_logs_part_0c5294f417\nFOR VALUES FROM ('2020-10-01') TO ('2020-11-01')\n")
   -> 0.0372s
-- table_exists?("gitlab_partitions_dynamic.web_hook_logs_part_0c5294f417_202011")
   -> 0.0349s
-- execute("CREATE TABLE gitlab_partitions_dynamic.web_hook_logs_part_0c5294f417_202011 PARTITION OF web_hook_logs_part_0c5294f417\nFOR VALUES FROM ('2020-11-01') TO ('2020-12-01')\n")
   -> 0.0380s
-- table_exists?("gitlab_partitions_dynamic.web_hook_logs_part_0c5294f417_202012")
   -> 0.0344s
-- execute("CREATE TABLE gitlab_partitions_dynamic.web_hook_logs_part_0c5294f417_202012 PARTITION OF web_hook_logs_part_0c5294f417\nFOR VALUES FROM ('2020-12-01') TO ('2021-01-01')\n")
   -> 0.0387s
-- table_exists?("gitlab_partitions_dynamic.web_hook_logs_part_0c5294f417_202101")
   -> 0.0343s
-- execute("CREATE TABLE gitlab_partitions_dynamic.web_hook_logs_part_0c5294f417_202101 PARTITION OF web_hook_logs_part_0c5294f417\nFOR VALUES FROM ('2021-01-01') TO ('2021-02-01')\n")
   -> 0.0430s
-- table_exists?("gitlab_partitions_dynamic.web_hook_logs_part_0c5294f417_202102")
   -> 0.0349s
-- execute("CREATE TABLE gitlab_partitions_dynamic.web_hook_logs_part_0c5294f417_202102 PARTITION OF web_hook_logs_part_0c5294f417\nFOR VALUES FROM ('2021-02-01') TO ('2021-03-01')\n")
   -> 0.0391s
-- table_exists?("gitlab_partitions_dynamic.web_hook_logs_part_0c5294f417_202103")
   -> 0.0345s
-- execute("CREATE TABLE gitlab_partitions_dynamic.web_hook_logs_part_0c5294f417_202103 PARTITION OF web_hook_logs_part_0c5294f417\nFOR VALUES FROM ('2021-03-01') TO ('2021-04-01')\n")
   -> 0.0384s
-- table_exists?("gitlab_partitions_dynamic.web_hook_logs_part_0c5294f417_202104")
   -> 0.0344s
-- execute("CREATE TABLE gitlab_partitions_dynamic.web_hook_logs_part_0c5294f417_202104 PARTITION OF web_hook_logs_part_0c5294f417\nFOR VALUES FROM ('2021-04-01') TO ('2021-05-01')\n")
   -> 0.0390s
   -> 1.3624s
-- execute("CREATE  FUNCTION table_sync_function_29bc99d6db()\nRETURNS TRIGGER AS\n$$\nBEGIN\nIF (TG_OP = 'DELETE') THEN\n  DELETE FROM web_hook_logs_part_0c5294f417 where id = OLD.id;\nELSIF (TG_OP = 'UPDATE') THEN\n  UPDATE web_hook_logs_part_0c5294f417\n  SET web_hook_id = NEW.web_hook_id,\n    trigger = NEW.trigger,\n    url = NEW.url,\n    request_headers = NEW.request_headers,\n    request_data = NEW.request_data,\n    response_headers = NEW.response_headers,\n    response_body = NEW.response_body,\n    response_status = NEW.response_status,\n    execution_duration = NEW.execution_duration,\n    internal_error_message = NEW.internal_error_message,\n    updated_at = NEW.updated_at,\n    created_at = NEW.created_at\n  WHERE web_hook_logs_part_0c5294f417.id = NEW.id;\nELSIF (TG_OP = 'INSERT') THEN\n  INSERT INTO web_hook_logs_part_0c5294f417 (id,\n    web_hook_id,\n    trigger,\n    url,\n    request_headers,\n    request_data,\n    response_headers,\n    response_body,\n    response_status,\n    execution_duration,\n    internal_error_message,\n    updated_at,\n    created_at)\n  VALUES (NEW.id,\n    NEW.web_hook_id,\n    NEW.trigger,\n    NEW.url,\n    NEW.request_headers,\n    NEW.request_data,\n    NEW.response_headers,\n    NEW.response_body,\n    NEW.response_status,\n    NEW.execution_duration,\n    NEW.internal_error_message,\n    NEW.updated_at,\n    NEW.created_at);\nEND IF;\nRETURN NULL;\n\nEND\n$$ LANGUAGE PLPGSQL\n")
   -> 0.0583s
-- execute("COMMENT ON FUNCTION table_sync_function_29bc99d6db IS 'Partitioning migration: table sync for web_hook_logs table'")
   -> 0.0832s
-- current_schema()
   -> 0.0341s
-- execute("CREATE TRIGGER table_sync_trigger_b99eb6998c\nAFTER INSERT OR UPDATE OR DELETE ON web_hook_logs\nFOR EACH ROW\nEXECUTE FUNCTION table_sync_function_29bc99d6db()\n")
   -> 0.0471s
== 20210306121300 PartitionWebHookLogs: migrated (10.3521s) ===================

== 20210306121310 BackfillPartitionedWebHookLogs: migrating ===================
-- transaction_open?()
   -> 0.0000s
-- Scheduled 10904 ::Gitlab::Database::PartitioningMigrationHelpers::BackfillPartitionedTable jobs with a maximum of 50000 records per batch and an interval of 120 seconds.
The migration is expected to take at least 1308480 seconds. Expect all jobs to have completed after 2021-03-23 22:10:03 UTC.
== 20210306121310 BackfillPartitionedWebHookLogs: migrated (3726.1946s) =======
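
Progress of the scheduled jobs can be spot-checked on the clone with a query along these lines (a sketch: the background_migration_jobs tracking table exists, but the exact class_name value stored for these jobs is an assumption here; status 0 = pending, 1 = succeeded):

SELECT status, COUNT(*)
FROM background_migration_jobs
WHERE class_name LIKE '%BackfillPartitionedTable%'  -- assumed match; the stored value may omit the namespace
GROUP BY status;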

Sample query plans of batched inserts using production clone

An equivalent setup was recreated manually on the clone:

exec CREATE TABLE web_hook_logs_part_0c5294f417 (
    id bigint NOT NULL,
    web_hook_id integer NOT NULL,
    trigger character varying,
    url character varying,
    request_headers text,
    request_data text,
    response_headers text,
    response_body text,
    response_status character varying,
    execution_duration double precision,
    internal_error_message character varying,
    updated_at timestamp without time zone NOT NULL,
    created_at timestamp without time zone NOT NULL,
    
    PRIMARY KEY (id, created_at)
)
PARTITION BY RANGE (created_at);

exec CREATE TABLE web_hook_logs_part_0c5294f417_000000 PARTITION OF web_hook_logs_part_0c5294f417 FOR VALUES FROM (MINVALUE) TO ('2020-07-01');
exec CREATE TABLE web_hook_logs_part_0c5294f417_202007 PARTITION OF web_hook_logs_part_0c5294f417 FOR VALUES FROM ('2020-07-01') TO ('2020-08-01');
exec CREATE TABLE web_hook_logs_part_0c5294f417_202008 PARTITION OF web_hook_logs_part_0c5294f417 FOR VALUES FROM ('2020-08-01') TO ('2020-09-01');
exec CREATE TABLE web_hook_logs_part_0c5294f417_202009 PARTITION OF web_hook_logs_part_0c5294f417 FOR VALUES FROM ('2020-09-01') TO ('2020-10-01');
exec CREATE TABLE web_hook_logs_part_0c5294f417_202010 PARTITION OF web_hook_logs_part_0c5294f417 FOR VALUES FROM ('2020-10-01') TO ('2020-11-01');
exec CREATE TABLE web_hook_logs_part_0c5294f417_202011 PARTITION OF web_hook_logs_part_0c5294f417 FOR VALUES FROM ('2020-11-01') TO ('2020-12-01');
exec CREATE TABLE web_hook_logs_part_0c5294f417_202012 PARTITION OF web_hook_logs_part_0c5294f417 FOR VALUES FROM ('2020-12-01') TO ('2021-01-01');
exec CREATE TABLE web_hook_logs_part_0c5294f417_202101 PARTITION OF web_hook_logs_part_0c5294f417 FOR VALUES FROM ('2021-01-01') TO ('2021-02-01');
exec CREATE TABLE web_hook_logs_part_0c5294f417_202102 PARTITION OF web_hook_logs_part_0c5294f417 FOR VALUES FROM ('2021-02-01') TO ('2021-03-01');
exec CREATE TABLE web_hook_logs_part_0c5294f417_202103 PARTITION OF web_hook_logs_part_0c5294f417 FOR VALUES FROM ('2021-03-01') TO ('2021-04-01');
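
The attached partitions can be double-checked against pg_inherits (a generic PostgreSQL query, not part of the MR):

SELECT inhrelid::regclass AS partition_name
FROM pg_inherits
WHERE inhparent = 'web_hook_logs_part_0c5294f417'::regclass
ORDER BY inhrelid::regclass::text;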

Query plans were captured during a fairly busy period (~20 active instances); the execution times are consistent with what we see on the production clone and on par with our worst-case expectations. Note that the FOR UPDATE lock on the source rows plus ON CONFLICT (id, created_at) DO NOTHING makes each batch idempotent (safe on retries and against rows already mirrored by the sync trigger):

explain INSERT INTO web_hook_logs_part_0c5294f417
SELECT *
FROM web_hook_logs
WHERE id BETWEEN 600000000 AND 600002525
FOR UPDATE
ON CONFLICT (id, created_at) DO NOTHING;

 ModifyTable on public.web_hook_logs_part_0c5294f417  (cost=0.57..3033.49 rows=2428 width=1417) (actual time=2441.354..2441.355 rows=0 loops=1)
   Buffers: shared hit=22957 read=2866 dirtied=2687
   I/O Timings: read=2183.762
   ->  Subquery Scan on *SELECT*  (cost=0.57..3033.49 rows=2428 width=1417) (actual time=8.767..1963.278 rows=2500 loops=1)
         Buffers: shared hit=5359 read=2142 dirtied=2131
         I/O Timings: read=1905.124
         ->  LockRows  (cost=0.57..3003.14 rows=2428 width=1419) (actual time=8.764..1957.982 rows=2500 loops=1)
               Buffers: shared hit=5359 read=2142 dirtied=2131
               I/O Timings: read=1905.124
               ->  Index Scan using web_hook_logs_pkey on public.web_hook_logs  (cost=0.57..2978.86 rows=2428 width=1419) (actual time=8.553..1924.873 rows=2500 loops=1)
                     Index Cond: ((web_hook_logs.id >= 600000000) AND (web_hook_logs.id <= 600002525))
                     Buffers: shared hit=369 read=2140 dirtied=2
                     I/O Timings: read=1904.931

 Time: 2.448 s  
  - planning: 6.727 ms  
  - execution: 2.441 s (estimated* for prod: 0.301...2.446 s)  
    - I/O read: 2.184 s  
    - I/O write: N/A  
  
 Shared buffers:  
  - hits: 22957 (~179.40 MiB) from the buffer pool  
  - reads: 2866 (~22.40 MiB) from the OS file cache, including disk I/O  
  - dirtied: 2687 (~21.00 MiB)  
  - writes: 0  
  
----

explain INSERT INTO web_hook_logs_part_0c5294f417
SELECT *
FROM web_hook_logs
WHERE id BETWEEN 650000000 AND 650002848
FOR UPDATE
ON CONFLICT (id, created_at) DO NOTHING;

Time: 501.432 ms  
  - planning: 0.144 ms  
  - execution: 501.288 ms (estimated* for prod: 0.113...0.511 s)  
    - I/O read: 391.854 ms  
    - I/O write: N/A  
  
Shared buffers:  
  - hits: 27868 (~217.70 MiB) from the buffer pool  
  - reads: 1471 (~11.50 MiB) from the OS file cache, including disk I/O  
  - dirtied: 1242 (~9.70 MiB)  
  - writes: 0  
  
----

explain INSERT INTO web_hook_logs_part_0c5294f417
SELECT *
FROM web_hook_logs
WHERE id BETWEEN 700000000 AND 700002551
FOR UPDATE
ON CONFLICT (id, created_at) DO NOTHING;

Time: 2.242 s  
  - planning: 0.344 ms  
  - execution: 2.241 s (estimated* for prod: 0.186...2.265 s)  
    - I/O read: 2.048 s  
    - I/O write: N/A  
  
Shared buffers:  
  - hits: 24047 (~187.90 MiB) from the buffer pool  
  - reads: 2749 (~21.50 MiB) from the OS file cache, including disk I/O  
  - dirtied: 2415 (~18.90 MiB)  
  - writes: 0  
    
----

explain INSERT INTO web_hook_logs_part_0c5294f417
SELECT *
FROM web_hook_logs
WHERE id BETWEEN 750000000 AND 750002536
FOR UPDATE
ON CONFLICT (id, created_at) DO NOTHING;

Time: 920.913 ms  
  - planning: 0.163 ms  
  - execution: 920.750 ms (estimated* for prod: 0.254...0.968 s)  
    - I/O read: 743.003 ms  
    - I/O write: N/A  
  
Shared buffers:  
  - hits: 24887 (~194.40 MiB) from the buffer pool  
  - reads: 1347 (~10.50 MiB) from the OS file cache, including disk I/O  
  - dirtied: 1194 (~9.30 MiB)  
  - writes: 0  

Does this MR meet the acceptance criteria?

Conformity

Availability and Testing

Security

If this MR contains changes to processing or storing of credentials or tokens, authorization and authentication methods and other items described in the security review guidelines:

  • [-] Label as security and @ mention @gitlab-com/gl-security/appsec
  • [-] The MR includes necessary changes to maintain consistency between UI, API, email, or other methods
  • [-] Security reports checked/validated by a reviewer from the AppSec team

Related to #323674 (closed)
