This project is mirrored from https://github.com/metabase/metabase.
- May 11, 2022
dpsutton authored
- May 02, 2022
Noah Moss authored
dpsutton authored
* dir-locals for api/let-404
* Driver supports persisted model
* PersistedInfo model

  Far easier to develop this model with the following SQL (the migration can come later; it's much easier to just drop the table, `\i persist.sql`, and keep developing without worrying about the migration having changed so it can't roll back, SHAs, etc.):

  ```sql
  create table persisted_info (
      id serial primary key not null
      ,db_id int references metabase_database(id) not null
      ,card_id int references report_card(id) not null
      ,question_slug text not null
      ,query_hash text not null
      ,table_name text not null
      ,active bool not null
      ,state text not null
      ,UNIQUE (db_id, card_id)
  )
  ```

* Persisting API (not making/deleting tables yet)

  Useful from the command line (this is httpie):

  ```shell
  http POST "localhost:3000/api/card/4075/persist" Cookie:$COOKIE -pb
  http DELETE "localhost:3000/api/card/4075/persist" Cookie:$COOKIE -pb
  ```

* Pull format-name into ddl.i
* Postgres DDL
* Hook up endpoints
* Move schema-name into interface
* Better JDBC connection management
* Hot-swap persisted tables into the query processor
* clj-kondo fixes
* Docstrings
* Fix bad alias in test infra
* Remove leftover `format-name` test function: everything uses `ddl.i/format-name` and this rump was left behind
* Keep columns in persisted_info that are in the persisted query

  I thought about a tuple of `[col-name type]` instead of just the col-name. I didn't do that because I want to ensure we compute the db-type in ONLY ONE WAY, ever, and I wasn't ready to commit to that yet. I'm not sure it's necessary, so for now it remains out. Context: we hot-swap the persisted table in for the original query, matching on the query hash remaining the same. It continues to use the metadata from the original query and just does `select cols from table`.
* Add migration for persisted_info table

  Also removes db_id; I don't know why I thought that was necessary. This also means we don't need another unique constraint on (db_id, card_id), since we can just mark card_id as unique.
* Fix ns in a sad manner :( far better to just have no alias to indicate it is required for side effects
* Don't hardcode a card-id :( my bad
* Copy the PersistedInfo
* ns cleanup: wrong alias, reflection warning
* Check that the state of persisted_info is "persisted"
* API to enable persistence on a database

  I'm not wild about POST /api/database/:id/persist and POST /api/database/:id/unpersist, but carrying on (left a note about it). Now you can enable persistence on a db, enable persistence on a model by posting to api/card/:id/persist, and everything works. What does not work yet is unpersisting or re-persisting models when using the db toggle.
* Add refresh_begin and refresh_end to persisted_info

  This information helps in two ways:
  - when we need to chunk refreshing models, it lets us order by staleness, so we can refresh a few models and pick up the rest later
  - if we desire, we can look at the elapsed time of previous refreshes to gauge the amount of work, giving us a bit of look-ahead; we can track our progress as we go, but there's no way to know whether the next refresh might take an hour
* Refresh tables every 8 hours ("0 0 0/8 * * ? *")

  There is one single job doing this, named "metabase.task.PersistenceRefresh.job", with 0 triggers by default. Each database with persisted models adds a trigger to it to refresh those models every 8 hours. When you unpersist a model, it immediately removes the table and then deletes the persisted_info record. When you mark a database as persist false, it immediately marks all persisted_info rows as inactive and deleteable and unschedules its trigger; a background thread then starts removing the tables.
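The refresh_begin/refresh_end bookkeeping above exists so that chunked refreshes can be ordered by staleness. A minimal sketch of that ordering (illustrative Python with hypothetical field names; the real implementation is Clojure and SQL inside Metabase):

```python
from datetime import datetime, timezone

def pick_stalest(persisted_infos, batch_size):
    """Return the batch_size models whose last refresh finished longest ago.

    Models that have never finished a refresh (refresh_end is None) are
    treated as infinitely stale and come first.
    """
    epoch = datetime.min.replace(tzinfo=timezone.utc)
    ordered = sorted(persisted_infos, key=lambda p: p["refresh_end"] or epoch)
    return ordered[:batch_size]

infos = [
    {"card_id": 1, "refresh_end": datetime(2022, 4, 6, tzinfo=timezone.utc)},
    {"card_id": 2, "refresh_end": None},
    {"card_id": 3, "refresh_end": datetime(2022, 4, 5, tzinfo=timezone.utc)},
]
batch = pick_stalest(infos, 2)  # card 2 (never refreshed), then card 3
```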
* Schedule refreshing on startup, watching for already-scheduled jobs; does not allow for schedule changes, but that's a future endeavor
* Appease our linter overlords
* Dynamic var to inhibit persistence when refreshing

  Also, it checked the state against "active" instead of "persisted", which is really freaky. How has this worked in the past if that's the case?
* API docstrings on card persist
* Docstring
* Don't sync the persisted schemas
* Fix bad SQL when there are no deleteable rows (we were getting an error when there were no ids)
* TaskHistory for refreshing
* Add created_at to persisted_info table; helpful if this ever ends up in the audit section
* Works on Redshift: hooked up the hierarchy, and Redshift is close enough that it just works
* Remove persisted_info record after deleting "deleteable" tables
* Better way to check that something exists
* POST /api/<card-id>/refresh: API to refresh a model's persisted record
* Return a 204 from refreshing
* Add buttons to persist/unpersist a database and a model for PoC (#21344)
* Redshift and Postgres report true for persist-models

  There are separate notions of "persistence is possible" vs. "persistence is enabled". Seems like we're just going to check details for enabled and rely on the driver multimethod for whether it is possible.
* Feature for enabled; hydrate card with persisted

  Two features: :persist-models for which dbs support it, and :persist-models-enabled for when that option is enabled. POST to api/<card-id>/unpersist. Hydrate persisted on cards so the FE can display persist/unpersist for models.
* Adjust migration number
* Remove deferred-tru
* Conditionally hydrate persisted on models only
* Look in the right spot for persist-models-enabled
* Move persist enabled into options, not details: changing details recomposes the pool, which is especially bad now that we have refresh tasks reusing the same connection
* Outdated comment
* Clean up source queries from persisted models

  Their metadata might have had `[:field 19 nil]` field_refs, and we should substitute just `[:field "the-name" {:base-type :type/Whatever-type}]`, since it will be a select from a native query. Otherwise you get the following:

  ```
  2022-03-31 15:52:11,579 INFO api.dataset :: Source query for this query is Card 4,088
  2022-03-31 15:52:11,595 WARN middleware.fix-bad-references :: Bad :field clause [:field 4070 nil] for field "category.catid" at [:fields]: clause should have a :join-alias. Unable to infer an appropriate join. Query may not work as expected.
  2022-03-31 15:52:11,596 WARN middleware.fix-bad-references :: Bad :field clause [:field 4068 nil] for field "category.catgroup" at [:fields]: clause should have a :join-alias. Unable to infer an appropriate join. Query may not work as expected.
  2022-03-31 15:52:11,596 WARN middleware.fix-bad-references :: Bad :field clause [:field 4071 nil] for field "category.catname" at [:fields]: clause should have a :join-alias. Unable to infer an appropriate join. Query may not work as expected.
  2022-03-31 15:52:11,596 WARN middleware.fix-bad-references :: Bad :field clause [:field 4069 nil] for field "category.catdesc" at [:fields]: clause should have a :join-alias. Unable to infer an appropriate join. Query may not work as expected.
  ... (the same four warnings repeat two more times)
  ```

  I think it's complaining that that table is not joined in the query and giving up.

  While doing this I saw we are hitting the database a lot:

  ```
  2022-03-31 22:52:18,838 INFO api.dataset :: Source query for this query is Card 4,111
  2022-03-31 22:52:18,887 INFO middleware.fetch-source-query :: Substituting cached query for card 4,111 from metabase_cache_1e483_229.model_4111_redshift_c
  2022-03-31 22:52:18,918 INFO middleware.fetch-source-query :: Substituting cached query for card 4,111 from metabase_cache_1e483_229.model_4111_redshift_c
  2022-03-31 22:52:18,930 INFO middleware.fetch-source-query :: Substituting cached query for card 4,111 from metabase_cache_1e483_229.model_4111_redshift_c
  ```

  I tried to track down why we are doing this so much but couldn't get there. I think I need to ensure that we are using the query store, annoyingly :(
* Handle native queries

  Didn't nest the vector in the `or` clause correctly; that was truthy only when the mbql-query local was truthy. Can't put the vector `[false mbql-query]` there and rely on that behavior.
* Handle datetimetz in persisting
* Errors saved into persisted_info
* Reorder migrations to put v43.00-047 before 048
* Correct arity mismatch in tests
* Comment in refresh task
* GET localhost:3000/api/persist

  Returns persisting information:
  - most information from the `persisted_info` table (excludes a few columns: query_hash, question_slug, created_at)
  - adds database name and card name
  - adds next fire time from Quartz scheduling

  ```shell
  ❯ http GET "localhost:3000/api/persist" Cookie:$COOKIE -pb
  [
    {
      "active": false,
      "card_name": "hooking reviews to events",
      "columns": ["issue__number", "actor__login", "user__login", "submitted_at", "state"],
      "database_id": 19,
      "database_name": "pg-testing",
      "error": "No method in multimethod 'field-base-type->sql-type' for dispatch value: [:postgres :type/DateTimeWithLocalTZ]",
      "id": 4,
      "next-fire-time": "2022-04-06T08:00:00.000Z",
      "refresh_begin": "2022-04-05T20:16:54.654283Z",
      "refresh_end": "2022-04-05T20:16:54.687377Z",
      "schema_name": "metabase_cache_1e483_19",
      "state": "error",
      "table_name": "model_4077_hooking_re"
    },
    {
      "active": true,
      "card_name": "redshift Categories",
      "columns": ["catid", "catgroup", "catname", "catdesc"],
      "database_id": 229,
      "database_name": "redshift",
      "error": null,
      "id": 3,
      "next-fire-time": "2022-04-06T08:00:00.000Z",
      "refresh_begin": "2022-04-06T00:00:01.242505Z",
      "refresh_end": "2022-04-06T00:00:01.825512Z",
      "schema_name": "metabase_cache_1e483_229",
      "state": "persisted",
      "table_name": "model_4088_redshift_c"
    }
  ]
  ```

* Include card_id in /api/persist
* Drop table if exists
* Handle rescheduling refresh intervals

  There is a single global value for the refresh interval. The API requires 1 <= value <= 23. There is no validation if someone changes the value in the db or with an env variable; setting it to a nonsensical value could cause enormous load on the db, so they shouldn't do that. On startup, unschedule all tasks and then reschedule them to make sure they have the latest value. One thing to note: there is a single global value, but I'm making a task for each database. Per-database intervals seem like an obvious future enhancement, so I don't want to deal with migrations; figure this gives us the current spec behavior (a trigger for each db with the same value) and lets us get more interesting using the `:options` on the database in the future.
* Mark as admin, not internal

  Lets it show up in `api/setting/`. I'm torn on how special this value is. Is it the setting code's responsibility to invoke the rescheduling of refresh triggers, or should that be on the setting itself? It feels "special" and can do a lot of work from just setting an integer. There's a special endpoint to set it which is aware, and thus it would be a bit of an error to set this setting through the more traditional setting endpoint.
* Allow for "once a day" refresh interval
* Global setting to enable/disable

  POST api/persist/enable and POST api/persist/disable. Enable allows the other scheduling operations (enabling on a db, and then on a model). Disable will:
  - update each enabled database and disable persistence in its options
  - update each persisted_info record, setting it inactive and state deleteable
  - unschedule the refresh triggers
  - schedule a task to unpersist each model (deleting the table and the associated persisted_info row)
* Offset and limits on persisted info list

  ```shell
  http get "localhost:3000/api/persist?limit=1&offset=1" Cookie:$COOKIE -pb
  {
    "data": [
      {
        "active": true,
        "card_id": 4114,
        "card_name": "Categories from redshift",
        "columns": ["catid", "catgroup", "catname", "catdesc"],
        "database_id": 229,
        "database_name": "redshift",
        "error": null,
        "id": 12,
        "next-fire-time": "2022-04-08T00:00:00.000Z",
        "refresh_begin": "2022-04-07T22:12:49.209997Z",
        "refresh_end": "2022-04-07T22:12:49.720232Z",
        "schema_name": "metabase_cache_1e483_229",
        "state": "persisted",
        "table_name": "model_4114_categories"
      }
    ],
    "limit": 1,
    "offset": 1,
    "total": 2
  }
  ```

* Include collection id, name, and authority level
* Include creator on persisted-info records
* Add settings to manage model persistence globally (#21546)
* Common machinery for running steps
* Add model cache refreshes monitoring page (#21551)
* Don't do shenanigans
* Refresh persisted and error persisted_info rows
* Remarks on migration column
* Lint nits (sorted-ns and docstrings)
* Clean up unused function, docstring
* Use `onChanged` prop to call extra endpoints (#21593)
* Tests for persist-refresh
* Reorder requires
* Use Quartz for individual refreshing, for safety

  Switch to using one-off jobs to refresh individual tables. Required adding some job context so we know which type to run. Also cleaned up the interface between ddl.interface and the implementations: the common behaviors (advancing persisted-info state, setting active, duration, etc.) are in a public `persist!` function which then calls the multimethod `persist!*` for just the individual action on the cached table. Still more work to be done:
  - do we want creating and deleting to be put into this type of system? Quite possibly.
  - we still don't know whether a query is running against the cached table, which should prevent dropping the table. Perhaps use some delay to give any running query time to finish. I don't think we can easily solve this in general, because another instance in the cluster could be querying against it and we don't have any quick pub/sub type of information sharing; DB writes would be quite heavy.
  - clean up the ddl.i/unpersist method the same way we did refresh and persist. Not quite clear what to do about errors, return values, etc.
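The split described above, a public `persist!` wrapper handling the common bookkeeping around a driver-specific `persist!*` action, can be sketched like this (an illustrative Python stand-in with invented field names, not the actual Clojure code):

```python
import time

def persist(persisted_info, do_persist):
    """Advance state, time the work, and record errors around a cache
    rebuild; the driver-specific work is delegated to do_persist (the
    analogue of the persist!* multimethod)."""
    persisted_info["state"] = "refreshing"
    start = time.monotonic()
    try:
        do_persist(persisted_info)
        persisted_info.update(state="persisted", active=True, error=None)
    except Exception as exc:
        persisted_info.update(state="error", active=False, error=str(exc))
    finally:
        persisted_info["duration_ms"] = int((time.monotonic() - start) * 1000)
    return persisted_info

def boom(info):
    raise ValueError("connection lost")

ok = persist({"card_id": 1}, lambda info: None)  # success path
bad = persist({"card_id": 2}, boom)              # error is recorded, not thrown
```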
* Update tests with more job-info in context
* Fix URL type conflicts
* Whoops, get rid of our Thread/sleep test :)
* Some tests for the new job-data; clean up task-history saving
* Fix database model persistence button states (#21636)
  * Use plain database instance on form
  * Fix DB model persistence toggle button state
  * Add common `getSetting` selector
  * Don't show caching button when turned off globally
  * Fix text issue
  * Move button from "Danger zone"
  * Fix unit test
* Skip default setting update request for model persistence settings (#21669)
  * Add a way to skip the default setting update request
  * Skip the default setting update for persistence
* Add changes for front-end persistence
  - Order by refresh_begin descending
  - Add endpoint /persist/:persisted-info-id for fetching a single entry
* Move PersistInfo creation into interface function
* Hide model cache monitoring page when caching is turned off (#21729)
  * Add persistence setting keys to `SettingName` type
  * Conditionally hide "Tools" from admin navigation
  * Conditionally hide caching Tools tab
  * Add route guard for Tools
  * Handle missing settings during init
* Add route for fetching persistence by card-id
* Wrangling persisted-info states

  Make Quartz jobs handle any changes to the database. Routes mark persisted-info state and potentially trigger jobs; jobs read persisted-info state.

  Jobs:
  - Prune
    - deletes PersistedInfo rows in state `deleteable`
    - deletes the cache table
  - Refresh
    - ignores `deletable` rows
    - updates PersistedInfo to `refreshing`
    - drops/creates/populates the cache table

  Routes:
  - card/x/persist: creates the PersistedInfo in state `creating` and triggers an individual refresh
  - card/x/unpersist: marks the PersistedInfo `deletable`
  - database/x/unpersist: marks the PersistedInfos `deletable` and stops the refresh job
  - database/x/persist: starts the refresh job
  - /persist/enable: starts the prune job
  - /persist/disable: stops the prune job, stops the refresh jobs, and triggers prune once
* Save the definition on persisted_info

  This removes the columns and query_hash columns in favor of definition. This means that if the persisted understanding of the model differs from the actual model during fetch-source-query, we won't substitute. This makes sure we keep columns and datatypes in line.
* Remove columns from API call
* Add a cache section to model details sidebar (#21771)
  * Extract `ModelCacheRefreshJob` type
  * Add model cache section to sidebar
  * Use `ModelCacheRefreshStatus` type name
  * Add endpoint to fetch persistence info by model ID
  * Use new endpoint at QB
  * Use `CardId` from `metabase-types/api`
  * Remove console.log
  * Fix `getPersistedModelInfoByModelId` selector
  * Use `t` instead of `jt`
* Provide seam for prune testing; fix spelling of "deletable"
* Include query hash on persisted_info

  We thought we could get away with just checking the definition, but that is schema-shaped. So if you changed a where clause we should invalidate, but the definition would be the same (same table name, columns with types).
* Put random hash in PersistedInfo test defaults
* Fix linters
* Use new endpoint for model cache refresh modal (#21742)
  * Use new endpoint for cache status modal
  * Update refresh timestamps on refresh
* Move migration to 44
* Dispatch on initialized driver
* Side effects get bangs!
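The persisted-info lifecycle that the routes and jobs above manipulate can be summarized as a small state table; this is an illustrative sketch reconstructed from the commit message, not code from the repo ("deleteable" keeps the spelling used by these commits):

```python
# Illustrative persisted_info.state machine. None means "no row exists".
TRANSITIONS = {
    (None,         "card/persist"):       "creating",
    ("creating",   "refresh"):            "refreshing",
    ("refreshing", "refresh-done"):       "persisted",
    ("persisted",  "refresh"):            "refreshing",
    ("persisted",  "card/unpersist"):     "deleteable",
    ("persisted",  "database/unpersist"): "deleteable",
    ("deleteable", "prune"):              None,  # row and cache table dropped
}

def step(state, action):
    """Return the next state, raising on transitions the jobs never make."""
    try:
        return TRANSITIONS[(state, action)]
    except KeyError:
        raise ValueError(f"illegal transition: {state!r} + {action!r}") from None
```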
* Batch hydrate :persisted on cards
* Bang on `models.persisted-info/make-ready!`
* Clean up a docstring
* Random fixes: docstrings, make private, etc.
* Bangs on side effects
* Rename global setting to `persisted-models-enabled`

  The old name (enabled-persisted-models) felt awkward; renamed to make it a bit more natural. If you are developing, you need to set the new value to true and then your state will stay the same.
* Rename parameter to site-uuid-str for clarity
* Lint cleanups

  Interesting that the compojure one is needed for clj-kondo, but I guess it makes sense since there is a raw `GET` in `defendpoint`.
* Docstring help
* Unify types :type/DateTimeWithTZ and :type/DateTimeWithLocalTZ

  Both are "TIMESTAMP WITH TIME ZONE". I had gotten an error and saw that the type was timestamptz, so I used that. They are synonyms, although it might require an extension.
* Make our old ns linter happy

Co-authored-by: Alexander Polyankin <alexander.polyankin@metabase.com>
Co-authored-by: Anton Kulyk <kuliks.anton@gmail.com>
Co-authored-by: Case Nelson <case@metabase.com>
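The hot-swap this commit series describes, substituting a plain SELECT against the cache table when the model's query hash still matches, might look roughly like this (illustrative Python; the real logic lives in Metabase's Clojure fetch-source-query middleware, and all names here are assumptions):

```python
import hashlib
import json

def query_hash(query):
    """Hash a dict-shaped model query, stable across key ordering."""
    canonical = json.dumps(query, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def substitute_cached(query, persisted_info):
    """If the model's query still matches what was persisted, return a
    plain SELECT against the cache table; otherwise return None so the
    caller falls through to the original query."""
    if (persisted_info
            and persisted_info["state"] == "persisted"
            and persisted_info["query_hash"] == query_hash(query)):
        cols = ", ".join(persisted_info["columns"])
        return (f"SELECT {cols} FROM "
                f"{persisted_info['schema_name']}.{persisted_info['table_name']}")
    return None
```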
- Apr 26, 2022
Maz Ameli authored
* Build pot file
* Update translations for 0.43
- Apr 25, 2022
Case Nelson authored
In H2, the column type for `revision.timestamp` was `["TIMESTAMP" "TIMESTAMP NOT NULL"]`. This change ensures it is correctly `["TIMESTAMP" "TIMESTAMP WITH TIME ZONE NOT NULL"]`. Postgres remains `timestamp with time zone`.
- Apr 23, 2022
Alexander Polyankin authored
- Apr 22, 2022
Alexander Polyankin authored
Benoit Vinay authored
- Apr 20, 2022
Ryan Laurie authored
- Apr 19, 2022
adam-james authored
* Disallow nil for timeline/event icons
* Test defaults include icons
- Apr 18, 2022
Maz Ameli authored
* Change the General permissions to Application
* Rename general permissions to application permissions
* BE: Rename General Perms to Application Perms (#21709)
  * BE: Change General Perms to Application Perms
  * Lint migration file
  * Add migration to update seq name
* Update application perms graph endpoint in FE

Co-authored-by: Aleksandr Lesnenko <alxnddr@gmail.com>
Co-authored-by: Ngoc Khuat <qn.khuat@gmail.com>
- Apr 15, 2022
Nick Fitzpatrick authored
* Updating favicon and changing toaster/notification behavior
- Apr 13, 2022
Case Nelson authored
* Add bookmark ordering

  Bookmark ordering is applied to the order-by clauses of bookmark fetches. To save an ordering, send a sequence of `[{type, item_id}]` values. There are no foreign keys to the `*_bookmark` tables, and bookmark ordering handles extra or missing rows gracefully by ignoring them or falling back to created_at as a secondary sort. Pruning is done on save.
* Fix missing docstring
* Fix refer order
* Use varchar for type because H2 is unhappy with a text index
* Cast untyped literal in union for H2
* Use hx/literal, thanks @dpsutton
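The ordering behavior described above (saved order applied where present, created_at as the fallback secondary sort, unknown entries ignored) can be sketched as follows; this is hypothetical Python, not the actual Clojure/SQL:

```python
def apply_bookmark_order(bookmarks, saved_order):
    """Order bookmarks by a saved [{type, item_id}] sequence.

    Entries in saved_order that no longer match a bookmark are simply
    unused; bookmarks missing from saved_order sort after the ordered
    ones, by created_at.
    """
    position = {(o["type"], o["item_id"]): i for i, o in enumerate(saved_order)}

    def key(b):
        pos = position.get((b["type"], b["item_id"]))
        # (unordered?, saved position, created_at fallback)
        return (pos is None, pos if pos is not None else 0, b["created_at"])

    return sorted(bookmarks, key=key)

saved = [{"type": "card", "item_id": 2}, {"type": "dashboard", "item_id": 1}]
bookmarks = [
    {"type": "dashboard", "item_id": 1, "created_at": 3},
    {"type": "card", "item_id": 7, "created_at": 1},  # not in saved order
    {"type": "card", "item_id": 2, "created_at": 2},
]
ordered = apply_bookmark_order(bookmarks, saved)
```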
Ngoc Khuat authored
* Add is_group_manager and hydrate it
* Update docstring and make sure is-group-manager? is converted to a boolean on any db
- Apr 08, 2022
adam-james authored
* Add `:is_installer` key to users returned from api/user/current
* Add `:has_question_and_dashboard` key to current users via hydration

  This is also part of the starting-page project: a key that helps the frontend serve more specific data to the user.
* Move `permissions-set` function up so it can be used in hydration
* Adjust recents query and first pass at popular-items query

  Recents: before, the recents query would look at the ViewLog, which records a view on cards even when they were only 'viewed' via a dashboard. For dashboards and tables, we don't need to make a change. For cards, we now query the QueryExecution table, use started_at as a proxy for the last-viewed timestamp, and only grab cards with an execution context of 'question'.

  Popular: we now want a notion of 'popular' items as well, which can look different from 'recents'. This is the first attempt at a reasonable query with a scoring system. The score takes into account:
  - recency of view
  - count of views total
  - whether the card/dashboard is 'verified' (enterprise)
  - whether the card/dashboard is inside an 'official' collection (enterprise)

  The popular score currently uses only *current-user-id* to fetch items, so popularity is (in this commit, at least) a per-user concept.
* Fix mistake and rename endpoint
* Typo fix
* Clean up QueryExecution in tests
* Indentation, naming, and some simple defs
* Try to reduce db calls for :has_question_and_dashboard

  I've moved the fn out of the hydration system to guarantee that it only runs for a single user and is only used in `GET /api/user/current` (and not accidentally used in other spots via the hydration mechanism). Still don't know how to check the Card and Dashboard tables more efficiently.
* ViewLog and QueryExecution have different timestamps; can't compare

  Pushing this for some review, but we might want to build a proper way to compare, so that recent cards and recent dashboards/tables are sorted equally.
* Fix namespace sorting
* Fix the endpoint name
* Sorting mixed date-time formats 'works' but isn't ideal

  This formats the timestamps into strings and uses those for comparison. Not perfect; pushing in case people are trying the branch.
* Use simpler db function
* Let the views and runs queries work with one user or all users

  popular_items is an all-users notion, but recent_views applies only to the current user.
* Unify view_log.timestamp with the query_execution.started_at type

  These both used to be `DATETIME` in the migration file. Then migration 168 bumped up the type of the query_execution.started_at column:

  ```
  - changeSet:
      id: 168
      author: camsaul
      comment: Added 0.36.0
      changes:
        - modifyDataType:
            tableName: query_execution
            columnName: started_at
            newDataType: ${timestamp_type}
  ```

  So these were no longer the same on H2 (and possibly MySQL), and sorting view logs and query executions would fail:

  ```clojure
  activity=> (->> (ViewLog) first :timestamp type)
  java.time.LocalDateTime
  activity=> (->> (QueryExecution) first :started_at type)
  java.time.OffsetDateTime
  ```

  The LocalDateTime and OffsetDateTime were not comparable, so make them identical types again.
* Bookmarked results should not show up in recents/popular

  This is done in a db query to still fetch enough items from the db to present to the user. Filtering bookmarked items later may result in an empty list: it's possible that the first N results are all bookmarked, and then the frontend would have no items to show. Filtering bookmarked results out from the beginning increases the chances of a non-empty result.
* :first_login populated with the earliest login timestamp

  If there is a first-login timestamp, that is used. If one does not exist, we assume it's the first time logging in and use an OffsetDateTime (now). This is possible since login_history is written off-thread, so on a real first login we might hit the db before the login is logged, resulting in an empty list. On subsequent checks we should see a proper timestamp from the db. Since this is used to check whether a user is 'new' (within 7 days of first logging in), the accuracy of the timestamp matters on the order of days, not milliseconds, so this should be OK.
* Passing test
* popular_items test

  Tweak the create-views! function to order items more consistently, and create views for dashboards/tables (ViewLog) and cards (QueryExecution) in a unified way, meaning we can reliably order views and write tests more easily. Note that the popular_items test may have to change if the scoring math changes.
* Fix e2e test
* Fix nit and bug
  - forgot to remove '0' from the `and` clause, so we didn't get the expected boolean
  - popular_items not WIP
* Another nit
* Fix popular items on the frontend

Co-authored-by: Alexander Polyankin <alexander.polyankin@metabase.com>
Co-authored-by: dan sutton <dan@dpsutton.com>
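The scoring factors listed above might combine roughly as follows. The weights here are invented purely for illustration; the actual commit computes its score in SQL and its math differs:

```python
from datetime import datetime, timezone

def popularity_score(item, now=None):
    """Score an item by recency of view, total views, and enterprise
    badges (verified / official collection). Weights are illustrative."""
    now = now or datetime.now(timezone.utc)
    days_since_view = (now - item["last_viewed_at"]).days
    recency = max(0.0, 1.0 - days_since_view / 30.0)  # decays to 0 over 30 days
    score = recency * 10 + item["view_count"]
    if item.get("verified"):
        score *= 1.5
    if item.get("in_official_collection"):
        score *= 1.25
    return score

now = datetime(2022, 4, 10, tzinfo=timezone.utc)
fresh = {"last_viewed_at": datetime(2022, 4, 9, tzinfo=timezone.utc), "view_count": 10}
stale = {"last_viewed_at": datetime(2022, 3, 1, tzinfo=timezone.utc), "view_count": 10}
```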
- Apr 07, 2022
adam-james authored
- Apr 01, 2022
Alexander Polyankin authored
Ngoc Khuat authored
* Add API to fetch general permissions graph
* Add API to update general permissions
* Change author of migration
* Update documents
* Misc fixes to appease the CIs
* Add tests for general permission APIs and models
* Linting and fix a failed test case
* Fix some failed tests
* Update docs and change /subscription/ to /general/subscription/ for consistency
* Hook and migration to make sure subscriptions are created for new groups by default
* Add schema migration tests
* Set for the win
* Address Noah's comments
* Parse number as-is in http-client for tests
* Address Cam's comments
* Revert the last commit about parsing API responses in tests
* Change fk name
* Delete a comment
* Changes:
  - Rename `changes` column to `after` to keep things consistent
  - If a group has no General Permissions, it will not be included in the graph
  - Update tests and some docs
* Fix failing tests in EE
* Add some tests and complete docstrings
* Polish comments
* Fix namespaces
* Add general permission flags to `GET /api/user/current` (#21250)
  - return general permission flags for /api/current
  - move permission flags to under
* Enforce Subscription permissions (#21285)
  - enforce general permissions for subscriptions, with tests
  - update check-has-general-permission to add an option to require superuser
  - add tests for the permissions helper function
  - move advanced-permissions check fns to an EE namespace
  - ignore exception when loading namespaces
* Enforce Monitoring permissions (#21321)
  - enforce permissions to call /api/dataset for internal queries
  - enforce monitoring permissions for api/task and api/util
  - add tests for OSS and for the db-connection-info endpoint
  - update name func and fix ns
* Enforce Setting permissions (#21386)
  - fix failing test
  - make sure we can run the Slack test twice
  - make the mock consistent
  - address Noah's comments
  - shorter permissions check
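A sketch of the check-has-general-permission helper with its require-superuser option, as described above; this is illustrative Python with assumed names, the real helper is Clojure and its exact signature may differ:

```python
def check_has_general_permission(user, perm, require_superuser=False):
    """Raise PermissionError if the user lacks the given general permission.

    Superusers implicitly hold every permission; require_superuser
    restricts the endpoint to superusers only.
    """
    if user.get("is_superuser"):
        return
    if require_superuser:
        raise PermissionError("must be a superuser")
    if perm not in user.get("general_permissions", set()):
        raise PermissionError(f"requires general permission: {perm}")
```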
- Mar 31, 2022
Nick Fitzpatrick authored
Updating Page Title and Favicon as Dashboards load
- Mar 29, 2022
Ngoc Khuat authored
* Add API to fetch general permissions graph
* Add API to update general permissions
* Change author of migration
* Update documents
* Misc fixes to appease the CIs
* Add tests for general permission APIs and models
* Linting and fix a failed test case
* Fix some failed tests
* Update docs and change /subscription/ to /general/subscription/ for consistency
* Address Noah's comments
* Parse number as-is in http-client for tests
* Revert the last commit about parsing API responses in tests
* Change fk name
* Changes:
  - Rename `changes` column to `after` to keep things consistent
  - If a group has no General Permissions, it will not be included in the graph
  - Update tests and some docs
* Fix failing tests in EE
* Add some tests and complete docstrings
* Fix namespaces
- Mar 23, 2022
Howon Lee authored
This is the first vertical slice of #708. There is a material amount of work still to go, including the MySQL implementation, plinking away at the data-model stuff, and the frontend. There are also big putative sync-speed gains I think I should chip away at.
- Mar 21, 2022
Dalton authored
- Mar 17, 2022
Noah Moss authored
- Mar 15, 2022
Noah Moss authored
- Mar 14, 2022
Cam Saul authored
Upgrade Liquibase to latest version; remove final Java source file and need for `clojure -X:deps prep` (#20611)

* Upgrade Liquibase to latest version
* Try adjusting log
* Fix checksums for the TWO migrations with ID = 32
* FINALLY get Liquibase to use Log4j2
* Set Liquibase ConsoleUIService OutputStream to a null OutputStream
* Manually define a package for our H2 proxy class so Java 8 works
* Fix package-name determination code
* Update migrations file spec
* `databasechangelog` shouldn't be upper-case
* Lower-case Quartz table names
* More MySQL fixes
* Properties for all the Quartz tables
* Formatting tweaks [ci skip]
* Revert a few more busted changes
* Fix more busted changes
* Bump Liquibase version to 4.8.0 to fix MySQL defaultValueBoolean bug
* OMG I think I finally fixed MySQL
* Remove Java source file and prep-deps code
* Remove two more references to bin/prep.sh
* Minor cleanup
* Revert unneeded changes
* Fix busted indentation
* Don't search inside java/ anymore since it's G-O-N-E
* Appease the namespace linter
* Update src/metabase/db/liquibase/h2.clj
Benoit Vinay authored
* accessors constants created * accessors dependencies removed from all charts * Removed accessors dependencies from BE JS for static viz * Removed accessors references in chart editor * accessors added to defaultProps * Remove accessors in tests * defaultProps removed from static viz components
-
- Mar 09, 2022
-
-
Gustavo Saiani authored
-
- Mar 04, 2022
-
-
Pawit Pornkitprasan authored
-
- Mar 02, 2022
-
-
Dalton authored
-
- Feb 23, 2022
-
-
Cam Saul authored
* Rewrite add-migrated-collections * Add new tests for the new migrations * Truncate tables (removing values added by Liquibase migrations) before copying * Clean namespaces * Use TRUNCATE TABLE ... CASCADE for Postgres copy * Update test * Sort namespaces * Remove stray println * Fix Postgres TRUNCATE TABLE behavior * Disable DB constraints before truncating tables * Fix fussy postgres
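The Postgres truncate-before-copy step described above can be sketched as follows. This is a minimal illustration only: the table names are hypothetical stand-ins, and the actual migration code may disable constraints by a different mechanism.

```sql
-- Disable trigger/FK enforcement for this session so truncation
-- order doesn't matter (assumes a sufficiently privileged role).
SET session_replication_role = replica;

-- Remove rows inserted by Liquibase data migrations before copying;
-- CASCADE also clears rows in tables referencing these.
TRUNCATE TABLE permissions_group CASCADE;
TRUNCATE TABLE permissions CASCADE;

-- Restore normal constraint enforcement.
SET session_replication_role = DEFAULT;
```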
-
Cam Saul authored
* Remove MetaBot Group * Fix missing :require * More fixes * Another fix
* change default sandboxed group_id * update group_ids Co-authored-by: Aleksandr Lesnenko <alxnddr@gmail.com>
-
- Feb 18, 2022
-
-
Cam Saul authored
-
- Feb 17, 2022
-
-
Noah Moss authored
-
adam-james authored
* Migrations for the new 'Annotate Useful Events' project Users will be able to create 'events' in timelines, which are accessible via collections. Two migrations are introduced to facilitate the new feature: timeline table, and event table. The following columns are presumed necessary based on the design document: ** TimeLine - id - name - description - collection_id - archived - creator_id - created_at - updated_by ?? - updated_at *** Event - id - timeline_id - name - description markdown (max length 255 for some reason) - date - time is optional - timezone (optional) - icon (set of 6) - archived status - created by - created at - created through: api or UI - modified by ?? - modified at * Changes to events schema - add icon onto timeline - make icon on event nullable - just move to a single timestamp on event, no boolean or anything - rename event table to `timeline_events` since "event" is so generic * dir-locals indentation * Timeline api and model - patched up the migration to make collection_id nullable since that is the root collection - followed the layout of api/metric and did the generic model stuff * Select keys with keywords, not destructured nils :( also, can't return the boolean from the update, but select it again * Enable the automatic updated_at * Boolean for "time matters" and string timezone column on events * clean up migration, rename modified_at -> updated_at for benefit of `:timestamped? true` bits * basic timeline event model namespace * Timeline Events initial API Just beginning the basics for the API for timeline events. 
- need to find a way to check permissions - have to still check if the endpoint is returning stuff properly * Singularize timeline_event tablename * Timeline events api hooked up * Singularize timeline event api namespace * unused comment * indent spec maps on routes * Make name optional on timeline PUT * Update collection api for timelines endpoints - add /root/timelines endpoint for when collection id is null - add /:id/timelines endpoint Created a hydration function to hydrate the timeline events when fetching timelines. * TimelineEvent permissions - create a permissions object set via the timeline's permissions * Move to using new permissions setup. Previously we were explicitly checking permissions of the underlying timeline from the event, but now we just do the standard read/write check on the event, and the permissions set knows it is based on the permission set of the underlying timeline (which is based on the permissions set of the underlying collection) * Items api includes timelines * Strip off icon from other rows * Indices on timeline and timeline_event timeline access by collection_id timeline_event by both timeline_id and (timeline_id, timestamp) for when looking for events within a certain range. We will always have been narrowed by timeline ids (per the collection containing the question, or by manually enabling certain timelines) so we don't need a huge index on the timestamp itself. * Skeleton of range-based query * Initial timeline API tests Began with some simple Auth tests and basic GET tests, using a Collection with an id as well as the special root collection. * Fix docstring on api timeline namespace * Put timeline's events at `:events` not `timeline-events` * Add api/card/:id/timelines At the moment api/card/:id/timelines and api/collection/:id/timelines do the same thing: they return all of the timelines in the collection or in the collection belonging to the card. They share an implementation at the moment, though this may drift in the future.
* Put creator on timeline on api endpoints for timeline by id and timelines by card and collection * Hydrate creator on events Bit tricky stuff here. The FE wants the events at "events" not "timeline-events", I guess due to the hyphen that js can never quite be sure isn't subtraction. To use the magic of nested hydration `(hydrate timelines [:events :creator])`, the events have to be put at the declared hydration key. I had wanted the hydration `:timeline-events` for clarity in the code but wanted `:events` to be the key. But when you do this, the hydration thinks it didn't do any work and cannot add the creator. So the layout of the data structure is tied to the name of the hydration key. Makes sense but I wish I had more control since `:events` is so generic. * Add include param check to allow timeline GET to hydrate w/ events. The basic check is in place for include=events in the following ways: GET /api/timeline/:id?include=events GET /api/collection/root/timelines?include=events If include param is omitted, timelines are hydrated only with the creator. Events are always hydrated with creator. * fix hyphen typo in api doc * Change schema for include=events to s/enum to enforce proper param Had used just a s/Str validation, which allows the include parameter value to be anything, which isn't great. So, switched it to an enum. Can always change this later to be a set of valid enums, similar to the `model` parameter on collection items. * Fixed card/:id/timelines failing w/ wrong arity The `include` parameter is passed to `timeline/timelines-for-collection` to control hydration of the events for that timeline. Since the card API currently calls this function too, it requires that parameter be passed along. As well, I abstracted the `include-events-schema` and have it in metabase.api.timeline. Not married to it, but since the schema was being used across timeline, collection, and card API, it seems like a good idea to define it in one place only.
Not sure if it should be metabase.api.timeline or metabase.models.timeline * used proper migration numbers (claimed in migrations slack channel) * Add archived=true functionality There's one subtle issue remaining, and that is that when archived=true, :creator is not hydrated on the events. I'll look into fixing that. In the meantime, the following changes are made: - :events returns an empty list `[]` instead of `null` when there are no events in a timeline - archived events can be fetched via `/api/timeline/:id?include=events&archived=true` - similarly, archived events can be fetched with: - `/api/collection/root|:id/timelines?include=events&archived=true` - `/api/card/:id/timelines?include=events&archived=true` Just note the caveat for the time being (no creator hydrated on events when fetching archived). Fix pending. * Altered the hydration so creator is always part of a hydrated event Adjusted the hydration of timelines as follows: - hydration always grabs all events - at the endpoint's implementation function, the timeline(s) are altered by filtering on archived true or false Not sure if this is the best approach, but for now it should work * Create GET endpoint for /api/timeline and allow archived=true Added a missed GET endpoint for fetching all the timelines (optionally fetch archived) * reformat def with docstring * Timeline api updated to properly filter archived/un-archived Use archived=true pattern in the query params for the timeline endpoint to allow FE to get what they need. Work in progress on the API tests too. * Timeline-events API tests Timeline Events API endpoint tests, at least the very basics. * Timeline Hydration w/ Events tests Added a test for Events hydration in the timeline API test namespace. * TimelineEvent Model test NS May be a bit unnecessary to test the hydrate function, but the namespace is there in case we need to add more tests over time.
Also adjusted out a comment and added a library to timeline_test ns * Added metabase.models.timeline-test NS Once again, this may be a bit unnecessary as a test ns for now, but could be the right place to add tests if the feature grows in the future. * Clean up handling of archived - we only want to show archived events when viewing a timeline by id and specifically asking for archived events. Presumably this is some administrative screen. We don't want to allow all these options when viewing a viz as it's just overload and they are archived for a reason. If the FE really wants them they can go by id of the timeline. Note it will return all events, not just the archived ones. - use namespaced keywords in the options map. Getting timelines necessarily requires getting their events sometimes. And to avoid confusion about the archived state, we have options like `:timeline/events?`, `:timeline/archived?`, `:events/start`, `:events/end`, and `:events/all?` so they are all in one map and not nested or ambiguous. * Clean up some tests noticeable differences: timelines with archived=true return all events as if it is "include archived" rather than only show archived. * PUT /api/timeline/:id now archives all events when TL is archived We can archive/un-archive timelines, and all timeline events associated with that timeline will follow suit. This does lose state when, for example, there is a mix of archived/un-archived events, as there is no notion of 'archived-by' to check against. But, per discussion in the pod-discovery slack channel, this is ok for now. It is also how we handle other archiving scenarios anyway, with items on collections being archived when a collection is archived. * cleanup tests * Include Timeline and TimelineEvent in models * Add tt/WithTempDefaults impls for Timeline/TimelineEvent lets us remove lots of the `(merge defaults {...})` that was in the code.
Defaults are ```clojure Timeline (fn [_] {:name "Timeline of bird squawks" :creator_id (rasta-id)}) TimelineEvent (fn [_] {:name "default timeline event" :timestamp (t/zoned-date-time) :timezone "US/Pacific" :time_matters true :creator_id (rasta-id)}) ``` * Add timeline and timeline event to copy infra * Timeline Test checking that archiving a Timeline archives its events A Timeline can be archived and its events should also be archived. A Timeline can also be unarchived, which should also unarchive the events. * Docstring clarity on apis * Remove commented out form * Reorder migrations * Clean ns clj-kondo ignores those but our linter does not * Correct casing on api schema for include * Ensure cleanup of timeline from test * DELETE for timeline and timeline-event * Poison collection items timeline query when is_pinned timelines have no notion of pinning and were coming back in both queries for pinned and not pinned. This adds a poison clause 1=2 when searching for pinned timelines since none are but they don't have a column saying that. * Clean up old comment and useless tests comment still said to poison the query when getting pinned state and that function had been added. Tests were asserting count = 2 and also the set of names had two things in them. Close enough since there are no duplicate names * Use TemporalString schema on timeline event routes Co-authored-by:
dan sutton <dan@dpsutton.com>
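Based on the columns and indices described in the commit above, the two tables might look roughly like this. This is a hedged sketch: column names, types, and referenced tables are inferred from the commit message, not copied from the actual Liquibase migration.

```sql
create table timeline (
   id            serial primary key not null
  ,name          varchar(255) not null
  ,description   varchar(255)
  ,icon          varchar(128) not null
  ,collection_id int references collection (id) -- nullable: null means the root collection
  ,archived      boolean not null default false
  ,creator_id    int references core_user (id) not null
  ,created_at    timestamp not null
  ,updated_at    timestamp not null
);
-- timelines are accessed by collection
create index idx_timeline_collection_id on timeline (collection_id);

create table timeline_event (
   id            serial primary key not null
  ,timeline_id   int references timeline (id) not null
  ,name          varchar(255) not null
  ,description   varchar(255)
  ,timestamp     timestamp not null -- single timestamp; "time matters" flag below
  ,time_matters  boolean not null
  ,timezone      varchar(255)
  ,icon          varchar(128)
  ,archived      boolean not null default false
  ,creator_id    int references core_user (id) not null
  ,created_at    timestamp not null
  ,updated_at    timestamp not null
);
-- events are looked up by timeline, and by timeline within a time range
create index idx_timeline_event_timeline_id on timeline_event (timeline_id);
create index idx_timeline_event_timeline_id_timestamp
  on timeline_event (timeline_id, timestamp);
```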
-
Cam Saul authored
* Rewrite add-databases-to-magic-permissions-groups * Add a test for v0.43.00-007 * Small test tweak.
-
- Feb 16, 2022
-
-
Cam Saul authored
* Rework add-admin-group-root-entry migration * Add test for the migration * Dump/load should not copy the root perms entry for the admin group
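For context, the admin group's "root perms entry" referred to above is conceptually a single row granting the root permissions path `/`, which matches every object. A sketch, assuming Metabase's `permissions` and `permissions_group` tables; the real change is a Liquibase-managed migration and may be written differently:

```sql
-- Grant the Administrators group the root permissions path "/".
insert into permissions (group_id, object)
select pg.id, '/'
from permissions_group pg
where pg.name = 'Administrators';
```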
-
Cam Saul authored
* Remove add-users-to-default-permissions-groups data migration * Revert changes to Cypress tests * Revert unneeded change
-
- Feb 15, 2022
-
-
Cam Saul authored
* Replace metabase.db.migrations/copy-site-url-setting-and-remove-trailing-slashes with a Liquibase migration * Tweak comment * `setting`, not `settings` * Clean namespace * `key` has to be quoted for MySQL * Add tests for migrations v0.43.00-008 and v0.43.00-009 * fix new migration for MySQL 5.7
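The Liquibase replacement described above boils down to a single UPDATE on the `setting` table. A sketch only: note that `key` must be quoted, as the commit calls out, and that the quoting style and string functions vary by database (MySQL uses backticks and `TRIM(TRAILING ... FROM ...)`), so the real changeset likely carries per-database SQL:

```sql
-- Strip any trailing slash from the stored site-url setting (Postgres flavor).
update setting
set "value" = rtrim("value", '/')
where "key" = 'site-url'
  and "value" like '%/';
```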
-
Cam Saul authored
* Replace metabase.db.migrations/migrate-humanization-setting with a Liquibase migration * Apparently key has to be quoted these days in MySQL/MariaDB * Need to quote key in the preCondition as well * Fix merge conflicts
-
Cam Saul authored
* Replace metabase.db.migrations/mark-category-fields-as-list with a Liquibase migration * Fix stray comma
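The data migration being replaced above presumably amounts to something like the following. The column and value names here are assumptions for illustration and are not taken from the actual changeset:

```sql
-- Mark fields typed as Category as "list" fields, so the UI can
-- offer a picker over all of the field's values.
update metabase_field
set has_field_values = 'list'
where semantic_type = 'type/Category'
  and has_field_values is null;
```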
-