This project is mirrored from https://github.com/metabase/metabase.
  1. Jul 25, 2022
    • Add a check to PUT /user/:id to disallow name edits if an SSO user (#23752) · 3795b56c
      adam-james authored
      * Add a check to PUT /user/:id to disallow name edits if an SSO user
      
      * Clean up after SAML SSO tests
      
      The `:sso_source` key is set for the Rasta user in some SAML tests, but is expected to be nil in subsequent tests,
      so we clean up in the SAML test ns.
      
      * Add a test to ensure SSO user names can't be changed via API
      
      * Missed a change I had made while adjusting tests
      
      * `valid-name-update?` takes one name, to allow better error messages (see the sketch after this list)
      
      Lets the user know whether the first or last name is the cause of the problem, rather than just 'names'.
      
      * Remove unneeded threading macro
      
      * Use partial=
      
      * slight change to local fn for reusability
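      A rough sketch of the per-name check described above (the actual signature in the PR may differ; `name-key` is illustrative):

      ```clojure
      (defn- valid-name-update?
        "True when `new-name` leaves the stored name unchanged, or the user is
        not an SSO user. SSO users get their names from the identity provider,
        so editing first or last name via the API is rejected."
        [user name-key new-name]
        (or (= new-name (get user name-key))
            (nil? (:sso_source user))))
      ```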
    • Ignore fields in joins when hoisting joined fields (#24167) · 0dc38ac3
      dpsutton authored
      * Ignore fields in joins when hoisting joined fields
      
      You need to understand two sides of MBQL to understand this.
      
      ```clojure
      (defn nest-expressions
        "Pushes the `:source-table`/`:source-query`, `:expressions`, and `:joins` in the top-level of the query into a
        `:source-query` and updates `:expression` references and `:field` clauses with `:join-alias`es accordingly. See
        tests for examples. This is used by the SQL QP to make sure expressions happen in a subselect."
        ...)
      ```
      
      Make a query against the orders table joined to the people table and
      select order id, order total, and the person's email:
      
      ```sql
       SELECT "PUBLIC"."orders"."id"    AS "ID",
             "PUBLIC"."orders"."total" AS "TOTAL",
             "People - User"."email"   AS "People - User__EMAIL"
      FROM   "PUBLIC"."orders"
             LEFT JOIN "PUBLIC"."people" "People - User"
                    ON "PUBLIC"."orders"."user_id" = "People - User"."id"
      LIMIT  1048575
      ```
      
      Now add a custom column called adjective, which is 'expensive' when
      total > 50 and 'cheap' otherwise (matching the generated SQL below):
      ```sql
       SELECT "source"."id"                   AS "ID",
             "source"."total"                AS "TOTAL",
             "source"."adjective"            AS "adjective",
             "source"."people - user__email" AS "People - User__EMAIL"
      FROM   (SELECT "PUBLIC"."orders"."id"         AS "ID",
                     "PUBLIC"."orders"."user_id"    AS "USER_ID",
                     "PUBLIC"."orders"."product_id" AS "PRODUCT_ID",
                     "PUBLIC"."orders"."subtotal"   AS "SUBTOTAL",
                     "PUBLIC"."orders"."tax"        AS "TAX",
                     "PUBLIC"."orders"."total"      AS "TOTAL",
                     "PUBLIC"."orders"."discount"   AS "DISCOUNT",
                     "PUBLIC"."orders"."created_at" AS "CREATED_AT",
                     "PUBLIC"."orders"."quantity"   AS "QUANTITY",
                     CASE
                       WHEN "PUBLIC"."orders"."total" > 50 THEN 'expensive'
                       ELSE 'cheap'
                     end                            AS "adjective",
                     "People - User"."email"        AS "People - User__EMAIL",
                     "People - User"."id"           AS "People - User__ID"
              FROM   "PUBLIC"."orders"
                     LEFT JOIN "PUBLIC"."people" "People - User"
                            ON "PUBLIC"."orders"."user_id" = "People - User"."id")
             "source"
      LIMIT  1048575
      ```
      
      We put the "work" in a nested query and then just update the outer
      select to select `source.total` instead of `orders.total`. But the way
      this figured out which joins to hoist up was a bit broken. It walked all
      over the inner query, finding fields it was interested in. However, it
      also got fields it shouldn't have, by descending into join information
      that should be opaque at this level.
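      A minimal sketch of the fix's idea, not the actual QP code: when
      collecting join-aliased fields to hoist, treat each join's subtree as
      opaque rather than walking into its conditions and source query.

      ```clojure
      (require '[clojure.walk :as walk])

      (defn joined-fields
        "Collect :field clauses carrying a :join-alias, without descending into
        the :joins themselves (their conditions and source queries are opaque
        at this level)."
        [inner-query]
        (let [fields (atom [])]
          (walk/postwalk
           (fn [form]
             (when (and (vector? form)
                        (= :field (first form))
                        (map? (last form))
                        (:join-alias (last form)))
               (swap! fields conj form))
             form)
           (dissoc inner-query :joins))
          @fields))
      ```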
      
      In the repro for this, we make a card Q1 on the reviews table, selecting
      product_id and count, but with an implicit join to products and
      filtering where category = 'Doohickey'.
      
      ```clojure
      ;; card Q1
      {:type :query,
       :query {:source-table 4,     #_ reviews
               :filter [:= [:field 26 {:source-field 33}] "Doohickey"], #_ products.category
               :aggregation [[:count]],
               :breakout [[:field 33 nil]],  #_ reviews.product_id
               :limit 2},
       :database 1}
      ```
      
      A second question Q2 queries the Orders table and joins to Q1 on
      orders.product_id = q1.product_id, and adds a custom column `+ 1 1`.
      ```clojure
      ;; card Q2, based on Q1
      {:source-table 2,
       :expressions {"CC" [:+ 1 1]},
       :fields [[:field 9 nil]
                [:field 3 nil]
                [:field 5 nil]
                [:field 6 nil]
                [:field 8 nil]
                [:field 7 nil]
                [:field 1 nil]
                [:field 4 {:temporal-unit :default}]
                [:field 2 nil]
                [:expression
                 "CC"
                 {:metabase.query-processor.util.add-alias-info/desired-alias "CC",
                  :metabase.query-processor.util.add-alias-info/position 9}]
                [:field 33 nil]
                [:field "count" {:base-type :type/BigInteger}]],
       :joins [{:alias "Question 4918",
                :strategy :left-join,
                :fields [[:field 33 nil]
                         [:field "count" {:base-type :type/BigInteger}]],
                :condition [:=
                            [:field 5 nil]
                            [:field 33 nil]],
                :source-card-id 4918,
                }]}
      ```
      
      And this should yield the SQL:
      
      ```sql
      SELECT "source"."ID" AS "ID",
             "source"."PRODUCT_ID" AS "PRODUCT_ID",
             "source"."TOTAL" AS "TOTAL",
             "source"."CC" AS "CC",
             "source"."Question 4918__PRODUCT_ID" AS "Question 4918__PRODUCT_ID",
             "source"."Question 4918__count" AS "Question 4918__count"
      FROM (
        SELECT "PUBLIC"."ORDERS"."ID" AS "ID",
               "PUBLIC"."ORDERS"."USER_ID" AS "USER_ID",
               "PUBLIC"."ORDERS"."PRODUCT_ID" AS "PRODUCT_ID",
               "PUBLIC"."ORDERS"."SUBTOTAL" AS "SUBTOTAL",
               "PUBLIC"."ORDERS"."TAX" AS "TAX",
               "PUBLIC"."ORDERS"."TOTAL" AS "TOTAL",
               "PUBLIC"."ORDERS"."DISCOUNT" AS "DISCOUNT",
               "PUBLIC"."ORDERS"."CREATED_AT" AS "CREATED_AT",
               "PUBLIC"."ORDERS"."QUANTITY" AS "QUANTITY",
               (1 + 1) AS "CC",
               "Question 4918"."PRODUCT_ID" AS "Question 4918__PRODUCT_ID",
               "Question 4918"."count" AS "Question 4918__count"
        FROM "PUBLIC"."ORDERS"
        LEFT JOIN (
          SELECT "PUBLIC"."REVIEWS"."PRODUCT_ID" AS "PRODUCT_ID",
                 count(*) AS "count"
          FROM "PUBLIC"."REVIEWS"
          LEFT JOIN "PUBLIC"."PRODUCTS" "PRODUCTS__via__PRODUCT_ID"
            ON "PUBLIC"."REVIEWS"."PRODUCT_ID" = "PRODUCTS__via__PRODUCT_ID"."ID"
          WHERE "PRODUCTS__via__PRODUCT_ID"."CATEGORY" = 'Doohickey'
          GROUP BY "PUBLIC"."REVIEWS"."PRODUCT_ID"
          ORDER BY "PUBLIC"."REVIEWS"."PRODUCT_ID" ASC) "Question 4918"
        ON "PUBLIC"."ORDERS"."PRODUCT_ID" = "Question 4918"."PRODUCT_ID") "source"
      LIMIT 1048575
      ```
      
      But when it was looking through to see which fields to hoist top level,
      it was also finding `"PRODUCTS__via__PRODUCT_ID"."ID"` and
      `"PRODUCTS__via__PRODUCT_ID"."CATEGORY"` as fields it should hoist up,
      because it searched the whole tree underneath it, which includes join
      conditions and such.

      But of course those shouldn't matter: the only real job is analyzing
      what fields come out of a query, and those fields do not come out of the
      nested query
      
      ```sql
          SELECT "PUBLIC"."REVIEWS"."PRODUCT_ID" AS "PRODUCT_ID",
                 count(*) AS "count"
          FROM "PUBLIC"."REVIEWS"
          LEFT JOIN "PUBLIC"."PRODUCTS" "PRODUCTS__via__PRODUCT_ID"
            ON "PUBLIC"."REVIEWS"."PRODUCT_ID" = "PRODUCTS__via__PRODUCT_ID"."ID"
          WHERE "PRODUCTS__via__PRODUCT_ID"."CATEGORY" = 'Doohickey'
          GROUP BY "PUBLIC"."REVIEWS"."PRODUCT_ID"
          ORDER BY "PUBLIC"."REVIEWS"."PRODUCT_ID" ASC
      ```
      
      that only returns product_id and count.
      
      And this matches the error it was (helpfully) throwing:
      
      ```clojure
      investigation=> (qp/compile nested-query)
      Execution error (ExceptionInfo) at metabase.query-processor.util.add-alias-info/field-source-table-alias (add_alias_info.clj:182).
      Cannot determine the source table or query for Field clause
      [:field
       26 #_ products.category
       {:source-field 33, #_reviews.product_id
        :join-alias "PRODUCTS__via__PRODUCT_ID",
        :metabase.query-processor.util.add-alias-info/source-table "PRODUCTS__via__PRODUCT_ID",
        :metabase.query-processor.util.add-alias-info/source-alias "CATEGORY"}]
      ```
      
      And here's the query it was analyzing when determining which fields it
      needed to hoist (looking for ones with `:join-alias`); this query is
      also in the nest_query_test.clj file:
      
      ```clojure
      ;; removed some cruft and extra field information for brevity
      {:source-table 2,
       :expressions  {"CC" [:+ 1 1]},
       :fields       [[:field 33 {:join-alias "Question 4918",}]
                      [:field "count" {:join-alias "Question 4918"}]]
       :joins        [{:alias           "Question 4918",
                       :strategy        :left-join,
                       :fields          [[:field 33 {:join-alias "Question 4918"}]
                                         [:field
                                          "count"
                                          {:join-alias "Question 4918"}]]
                       :condition       [:=
                                         [:field 5 nil]
                                         [:field 33 {:join-alias "Question 4918",}]],
                       :source-card-id  4918,
                       :source-query    {:source-table 4,
                                         ;; nested query has filter values with join-alias that should not
                                         ;; be selected
                                         :filter       [:=
                                                        [:field 26 {:join-alias "PRODUCTS__via__PRODUCT_ID"}]
                                                        [:value "Doohickey" {}]],
                                         :aggregation  [[:aggregation-options
                                                         [:count]
                                                         {:name "count"}]],
                                         :breakout     [[:field 33 nil]],
                                         :limit        2,
                                         :order-by     [[:asc
                                                         [:field 33 nil]]],
                                         ;; nested query has an implicit join with conditions that should
                                         ;; not be selected
                                         :joins        [{:alias        "PRODUCTS__via__PRODUCT_ID",
                                                         :strategy     :left-join,
                                                         :condition    [:=
                                                                        [:field 33 nil]
                                                                        [:field
                                                                         30
                                                                         {:join-alias "PRODUCTS__via__PRODUCT_ID"}]]
                                                         :source-table 1,
                                                         :fk-field-id  33}]},
                       :source-metadata [{:field_ref [:field 33 nil]}
                                         {:field_ref [:aggregation 0]}]}]}
      ```
      
      * Add round-trip test through qp
      
      This test is essentially the repro from the ticket
      https://github.com/metabase/metabase/issues/20809
      
      ```clojure
      debug-qp=> (to-mbql-shorthand (:dataset_query (metabase.models/Card 4919)))
      (mt/mbql-query
       orders
       {:joins [{:source-table "card__4918",
                 :alias "Question 4918",
                 :condition [:=
                             $product_id
                             [:field
                              %reviews.product_id
                              {:join-alias "Question 4918"}]],
                 :fields :all}],
        :expressions {"CC" [:+ 1 1]}})
      debug-qp=> (to-mbql-shorthand (:dataset_query (metabase.models/Card 4918)))
      (mt/mbql-query
       reviews
       {:breakout [$product_id],
        :aggregation [[:count]],
        :filter [:= $product_id->products.category "Doohickey"]})
      ```
      
      Thanks to @cam for providing such a lovely tool.
  7. Jul 15, 2022
    • Retain source alias for aggregates (#23771) · 6858dcef
      metamben authored
    • Check email test errors for javax.mail.AuthenticationFailedException (#23921) · 52336b97
      dpsutton authored
      * Check email test errors for javax.mail.AuthenticationFailedException
      
      Authing with Office365 led to a less-than-great user experience. If you
      type in the wrong username or password, we would display an unhelpful
      message:
      
      > Sorry, something went wrong. Please try again. Error::Exception
      reading response.Read timed out.
      
      This is an error from an inner nested exception which looks like:
      ```clojure
      (javax.mail.AuthenticationFailedException.
       "" ;; Office365 returns auth exception with no message so we only saw "Read timed out" prior
       (javax.mail.MessagingException.
        "Exception reading response"
        (java.net.SocketTimeoutException. "Read timed out")))
      ```
      
      We didn't get any message from the AuthenticationFailedException, so our
      humanization strategy could only go on the underlying "Read timed out".
      Now we can check against the error class and use that information.
      
      * Little bit of clarity
      
      Better function name (`check` -> `match-error`), better arg name
      (`regex|exception-class` -> `regex-or-exception-class`), and full
      argument names (`m` -> `message`, `es` -> `exceptions`).
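
      A hedged sketch of that renamed helper, using only the names above (the PR's actual implementation may differ):

      ```clojure
      (defn- match-error
        "Match either a regex against the exception `message`, or an exception
        class against the chain of `exceptions`."
        [regex-or-exception-class message exceptions]
        (if (instance? java.util.regex.Pattern regex-or-exception-class)
          (boolean (and message (re-find regex-or-exception-class message)))
          (boolean (some (partial instance? regex-or-exception-class) exceptions))))
      ```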
    • Handle email password obfuscation (#23925) · 5e49508d
      dpsutton authored
      The frontend receives obfuscated values for settings marked sensitive:
      
      ```clojure
      (defsetting email-smtp-password
        (deferred-tru "SMTP password.")
        :sensitive? true)
      ```
      
      and over the wire:
      ```javascript
          {
              "key": "email-smtp-password",
              "value": "**********ig",
              "is_env_setting": false,
              "env_name": "MB_EMAIL_SMTP_PASSWORD",
              "description": "SMTP password.",
              "default": null
          },
      ```
      
      So the frontend form never has a valid password in it. When you edit
      email settings, we check that they are valid, and if so, commit them.
      But with the wrong password the email settings are almost always
      invalid, so you have to re-type the email password just to change other
      settings, like the friendly name, reply-to email address, etc.
      
      So we recognize that we've received the obfuscated value, swap in the
      real value for the email connection test, and then obfuscate the
      password in the response. If we do not receive an obfuscated value, we
      just leave it alone and return the value anyway (they typed it in, so it
      seems safe to send it back).
      
      Note that there is a function in models.setting called
      `obfuscated-value?` that checks strings against the regex
      `#"^\*{10}.{2}$"`. This is used to avoid ever setting a setting to an
      obfuscated value, but it's a bit less precise than our purposes need. We
      have both a real value and an obfuscated value, so we can check whether
      the obfuscated value is based on the real value: (obfuscate-value
      "password") -> "**********rd". Whereas we might wrongly recognize
      "**********AA" as an obfuscated value if we reused that helper function.
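
      As a small sketch of the difference, assuming `obfuscate-value` keeps
      the last two characters as in the example above:

      ```clojure
      (defn obfuscate-value
        "Ten asterisks followed by the value's last two characters."
        [s]
        (str "**********" (subs s (max 0 (- (count s) 2)))))

      ;; the generic helper: anything shaped like ten stars + two chars counts
      (defn obfuscated-value? [s]
        (boolean (re-matches #"^\*{10}.{2}$" s)))

      (obfuscate-value "password")                    ;; => "**********rd"
      (= "**********rd" (obfuscate-value "password")) ;; => true, based on the real value
      (= "**********AA" (obfuscate-value "password")) ;; => false
      (obfuscated-value? "**********AA")              ;; => true: too permissive here
      ```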
      
      NOTE this could be weird if anyone's password changes from "password" to
      "**********rd", but the chances of that happening are minuscule.
      
      Had thought to have the FE not send a value at all if it is unchanged
      but this had some far-reaching implications that we don't want to tackle
      so close to the release. There's an associated issue for LDAP settings
      that is largely the same as this:
      https://github.com/metabase/metabase/issues/16708 But it is not clear to
      me if these are the only two places or if there could be others. Until
      that research is done we can just accept this patch as is and come up
      with a systematic approach in the future.
      
      But it does seem only three settings are sensitive:
      ```clojure
      setting=> (->> (vals @registered-settings)
                     (filter :sensitive?)
                     (map :name))
      (:saml-keystore-password :email-smtp-password :ldap-password)
      ```
      
      The direct setting APIs won't write setting values which appear
      obfuscated, but we have other endpoints that take collections of
      settings as a unit (for instance, the endpoint /api/email, which takes
      all of the email settings at once to test whether they work together).
    • Async card metadata (#23672) · 21aa0053
      dpsutton authored
      
      * Let's use the main thread
      
      * Strip out channel stuff and rename
      
      * 202 -> 200 response
      
      When returning a channel we return a 202; a plain map is just a 200.
      Since we no longer need the main work to be async (as opposed to the
      metadata work), we can just return the map with a 200 instead of the
      long-running channel machinery and a 202.
      
      * Last test
      
      * renames, logging, ensure query is the same before saving metadata
      
      * Sandbox test 202 -> 200
      
      * Another 202 -> 200
      
      * Put timeout on async metadata saving
      
      A timeout of 15 minutes before we give up on the async metadata saving.
      It is possible this cuts things off, but it's hard to tell whether work
      is still being done at that point.
      
      * outdated comment
      
      * Return json error message, not text/plain
      
      This is a subtle one that I'm not happy about. Our error handling will
      return text/plain if you throw an `(ex-info "something" {:status-code
      400})`.
      
      ```shell
      ❯ http post localhost:3000/api/timeline-event/ name=some-name description=Bob timestamp=2022 timezone=America/Central time_matters:=false timeline_id:=1629  Cookie:$COOKIE
      HTTP/1.1 404 Not Found
      Content-Type: text/plain
      
      Timeline with id 1,629 not found
      ```
      
      But if you add extra information to the ex-info's data map, you get
      JSON!
      
      ```clojure
      (defmethod api-exception-response Throwable
        [^Throwable e]
        (let [{:keys [status-code], :as info} (ex-data e)
              other-info                      (dissoc info :status-code :schema :type)
              body                            (cond
                                                (and status-code (empty? other-info))
                                                ;; If status code was specified but other data wasn't, it's something like a
                                                ;; 404. Return message as the (plain-text) body.
                                                (.getMessage e)
      
                                                ;; if the response includes `:errors`, (e.g., it's something like a generic
                                                ;; parameter validation exception), just return the `other-info` from the
                                                ;; ex-data.
                                                (and status-code (:errors other-info))
                                                other-info
      
                                                ;; Otherwise return the full `Throwable->map` representation with Stacktrace
                                                ;; and ex-data
                                                :else
                                                (merge
                                                 (Throwable->map e)
                                                 {:message (.getMessage e)}
                                                 other-info))]
          {:status  (or status-code 500)
           :headers (mw.security/security-headers)
           :body    body}))
      ```
      
      So this fix is a _very_ subtle way to get to what we want, although it
      does add a bunch of extra junk to our response.
      
      ```javascript
      {
       ...
       "message": "Invalid Field Filter: Field 26 \"PRODUCTS\".\"CATEGORY\"
      belongs to Database 1 \"Sample Database\", but the query is against
      Database 990 \"copy of sample dataset\"",
       "data": {
         "status-code": 400,
         "query-database": 990,
         "field-filter-database": 1}
       ...
      }
      ```
      
      Reminder of what we want: the frontend is saving a card. We added a
      field filter on a database and then changed the source database of the
      query. So the backend needs to reject the field filter as being on the
      wrong db. The FE expects a json response with a message and will then
      show that message in the save/edit modal.
      
      Why did this come up now: these endpoints for saving and editing a card
      used to always return 202 streaming responses, which meant they had to
      tuck any errors inside an already-open 202 response. That is why you got
      the message as a JSON response. But now that they are sync, the API can
      just return a proper 400 with a reason, and we still want that to be a
      JSON response for the FE.
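
      As a sketch of the workaround, with the extra keys taken from the
      example response above:

      ```clojure
      ;; extra ex-data keys beyond :status-code push the error handler past
      ;; the plain-text branch and into the JSON branch
      (throw (ex-info (str "Invalid Field Filter: Field 26 belongs to Database 1, "
                           "but the query is against Database 990")
                      {:status-code           400
                       :query-database        990
                       :field-filter-database 1}))
      ```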
      
      * error layout fix
      
      * Several Cleanups
      
      - make sure numbers have units at the end (-ms)
      - use (u/minutes->ms 15) rather than a more opaque (* 15 60 1000)
      - move the scheduled metadata saving into its own function
      
      This last bit is actually a bit more than that. I was previously
      throwing everything in a thread submitted to the pooled executor. I'm
      now using a `future`, but with a caveat: all of the waiting for the
      timeout and checking whether we got metadata is done in a simple `a/go`
      block now, and the thread does just the last bit of IO. We have to
      select the card as it is now, to ensure the query is the same, and then
      save.
      
      Refresher on why we have to check that the query is the same: we may
      wait up to 15 minutes for the query metadata to come back, and it's
      completely possible for the user to have edited the query in the
      meantime, making our metadata useless.
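
      A loose sketch of that shape (the persistence helpers are hypothetical
      stand-ins; the PR uses `(u/minutes->ms 15)` for the timeout):

      ```clojure
      (require '[clojure.core.async :as a])

      (declare fetch-card save-result-metadata!) ;; hypothetical persistence helpers

      (defn save-metadata-async!
        "Wait (in a go block) up to 15 minutes for metadata, then do the IO on
        a separate thread: re-fetch the card and save only if the query is
        unchanged."
        [metadata-chan card-id original-query]
        (a/go
          (let [[metadata port] (a/alts! [metadata-chan (a/timeout (* 15 60 1000))])]
            (when (and metadata (= port metadata-chan))
              (future
                (let [card (fetch-card card-id)]
                  (when (= (:dataset_query card) original-query)
                    (save-result-metadata! card metadata))))))))
      ```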
      
      Co-authored-by: Aleksandr Lesnenko <alxnddr@gmail.com>
    • Fix constantly checking token on error (#23815) · e7b4dc62
      dpsutton authored
      * Fix constantly checking token on error
      
      We were catching `clojure.lang.ExceptionInfo` which only catches a small
      subset of errors and not any related to what might be thrown in network
      calls.
      
      The reason this affected us is that the cache only stores returned
      values; when the function throws, nothing is cached and the call is
      retried. So each time we look at token features it attempts the call
      again and throws an error.

      If we instead do what the code _meant_ to do and return a value
      indicating there was an error, we are all good.
      
      Example:
      
      ```clojure
      premium-features=> (defn is-odd? [x]
                           (println "computing for " x)
                           (if (odd? x) true (throw (ex-info "boom" {}))))
      premium-features=> (def modd (memoize/ttl is-odd? :ttl/threshold 30000))
      premium-features=> (modd 3)
      computing for  3
      true
      premium-features=> (modd 5)
      computing for  5
      true
      premium-features=> (modd 3)
      true
      premium-features=> (modd 2)
      computing for  2
      Execution error (ExceptionInfo) at metabase.public-settings.premium-features/is-odd? (REPL:289).
      boom
      premium-features=> (modd 2)
      computing for  2
      Execution error (ExceptionInfo) at metabase.public-settings.premium-features/is-odd? (REPL:289).
      boom
      ```
      
      Note that `(modd 2)` keeps attempting to compute for 2 since it throws
      an error rather than returning a value indicating an error.
      
      When the token check throws an error:
      
      ```clojure
      premium-features=> (fetch-token-status (apply str (repeat 64 "b")))
      Execution error (UnknownHostException) at java.net.Inet6AddressImpl/lookupAllHostAddr (Inet6AddressImpl.java:-2).
      store.metabase.com: nodename nor servname provided, or not known
      ```
      
      When the token check returns a value indicating an error:
      
      ```clojure
      premium-features=> (fetch-token-status (apply str (repeat 64 "b")))
      {:valid false,
       :status "Unable to validate token",
       :error-details "store.metabase.com: nodename nor servname provided, or not known"}
      ```
      
      (both instances were with wifi turned off locally)
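
      The shape of the fix, sketched (`call-store-api` stands in for the real
      HTTP request; the error value is the one shown above):

      ```clojure
      (declare call-store-api) ;; stand-in for the real HTTP request

      (defn fetch-token-status*
        "Return an error value on failure instead of throwing, so memoize/ttl
        caches it and we stop re-hitting the store on every settings lookup."
        [token]
        (try
          (call-store-api token)
          ;; catch Throwable, not just clojure.lang.ExceptionInfo: network
          ;; failures like UnknownHostException are not ExceptionInfo
          (catch Throwable e
            {:valid         false
             :status        "Unable to validate token"
             :error-details (.getMessage e)})))
      ```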
      
      * add test
      
      * Assert we only call the api once when it throws an error
      
      Before this PR we would make a call for each setting (perhaps more),
      each time trying to hit the endpoint. Now it catches those errors and
      reports the token as not valid.
    • Store linked-filter fieldvalues (#23699) · e8d98d95
      Ngoc Khuat authored
      * first pass storing linked-filter fieldvalues
      
      * limit linked-filter to not explode
      
      * no check perms for fields when getting params values on dashboard
  9. Jul 13, 2022
    • Session activity timeout (#23349) · 0defe7f8
      Cal Herries authored
      
      * logout when session expires, login when session appears
      
      * add setting UI
      
      * Add last_activity column to session table
      
      * Start implementing session middleware to check for expired sessions
      
      * Change last_activity field to include timezone offset
      
      * Update session middleware to check user activity timeout
      
      * Update last_activity after checking the timeout, or not at all if the setting is nil
      
      * Move session-timeout settings to server.middleware.session
      
      * Handcode timeout for testing
      
      * Fix migrations validation error
      
      * Fix whitespace
      
      * Change session timeout to use metabase.TIMEOUT cookie with expiry
      
      * Remove migration for last_activity column on session table
      
      * Revert changes to logout endpoint
      
      * Revert change to Session model pre-update
      
      * Remove tap>
      
      * Fix tests to include cookie value
      
      * Fix timeout when user is logged out. Timeout loop should only start when a user is logged in
      
      * Update comment and date format
      
      * Store the session-timeout setting as json and convert it to seconds on the fly
      
      * Set zoned date time to use GMT instead of default time zone
      
      * Refactor for testing
      
      * refactor session listener (#23686)
      
      * remove old session listener
      
      * Clear the timeout cookie when user signs out
      
      * Clear session cookie if the timeout cookie expires
      
      * fe tweaks
      
      * Update expires attribute for session and timeout cookies together
      
      * Reapply minimum limit on session-timeout
      
      * Rename functions and fix lint warnings
      
      * Fix resetting session-timeout
      
      * Fix sign out
      
      * Fix tests
      
      * Whitespace
      
      * Get full-app-embeds working
      
      * Add test for embedded session
      
      * session timeout ui tweaks
      
      * fix security issue
      
      * Fix test
      
      * Fix tests
      
      * Do not redirect to "/" if there isn't any redirect URL
      
      * Add test for session-cookies setting
      
      * Fix bug when toggling off timeout and adjust tests
      
      Co-authored-by: Aleksandr Lesnenko <alxnddr@gmail.com>
      Co-authored-by: Aleksandr Lesnenko <alxnddr@users.noreply.github.com>
    • Remove `u/optional` (#23877) · b603c439
      Cam Saul authored
      * Remove u/optional
      
      * Spec => `s/` and Schema => `schema/`
      
      * Fix docstring generation.
      
      * Another test fix :wrench:
      
      * Remove unused namespaces
      
      * Parallelize some of the defendpoint tests
    • Handle archived and models reverted to regular questions (#23738) · 7caca2ec
      dpsutton authored
      * Handle archived and models reverted to regular questions
      
      Should not refresh these and should unpersist them
      
      Updated the signature of `refresh!` in the `Refresher` protocol to take
      a card instead of a dataset query so that the tests could use the card's
      id for checks. The `dispatching-refresher` then calls the
      `ddl.i/refresh!` multimethod with the dataset_query from the card, so
      it's a very surface-level refactoring.
      
      Updated the stats from refreshing to be a bit smarter. It used to have a
      `when` block for its check against the model's state, which meant that a
      state change preventing a refresh would throw away the accumulated
      stats. Whoops. Now it records that as a skip. The check itself has grown
      to cover the archived and dataset status of the underlying card.
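
      Roughly, the new check and bookkeeping look something like this (names
      are illustrative):

      ```clojure
      (defn- refreshable?
        "Only refresh persisted models backed by a live, still-a-model card."
        [card]
        (and (not (:archived card))
             (:dataset card)))

      (defn- record-refresh [stats card refresh-thunk]
        (if (refreshable? card)
          (do (refresh-thunk)
              (update stats :refreshed inc))
          ;; previously a bare `when`, which dropped the accumulated stats
          (update stats :skipped inc)))
      ```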
      
      * Block api/card/card-id/refresh from refreshing archived/unmodeled
      
      From page http://localhost:3000/admin/tools/model-caching we have a list
      of PersistedInfo records and a way to refresh them. We've already
      prevented the scheduled tasks from refreshing archived/unmodeled
      questions and now the api won't let a user manually do it either.
      
      * ns linter demands allegiance
      
      * Include card archived and dataset information on api/persist
      
      Include information so that the frontend can disallow refreshing
      persisted models whose cards are archived or are no longer models. The
      response from `http://localhost:3000/api/persist?limit=20&offset=0` now
      includes two new attributes on the items in the data array:
      `"card_archived"` and `"card_dataset"`. I'm happy to rename if desired.
      
      These are present so that if `card_archived=true` or
      `card_dataset=false` we should not allow refreshing the persisted
      model. The API will already reject this request with a 400 so we should
      let the user know why this is no longer valid.
      
      I'm happy to leave them as the regular property names `archived` and
      `dataset`, but put them with the `card_` prefix so it is clear that
      these are not properties of a PersistedInfo but of the card that the
      persisted info references.
      
      ```javascript
      {
        "data": [
          {
            "definition": {
              "table-name": "model_3_model_from",
              "field-definitions": [
                { "field-name": "id", "base-type": "type/Integer" },
                { "field-name": "total", "base-type": "type/Float" },
                { "field-name": "quantity", "base-type": "type/Integer" }
              ]
            },
            "creator": null,
            "schema_name": "metabase_cache_944b9_2",
            "database_id": 2,
            "collection_id": null,
            "database_name": "sample dataset pg",
            "state": "persisted",
            "refresh_begin": "2022-07-07T14:22:51.512016Z",
            "refresh_end": "2022-07-07T14:22:51.52032Z",
            "collection_name": null,
            "collection_authority_level": null,
            "creator_id": null,
            "card_archived": true, <---- new
            "active": true,
            "id": 3,
            "card_dataset": true,  <---- new
            "card_id": 3,
            "error": null,
            "next-fire-time": null,
            "table_name": "model_3_model_from",
            "card_name": "model from orders archived"
          },
          {
            "definition": {
              "table-name": "model_2_model_from",
              "field-definitions": [
                { "field-name": "id", "base-type": "type/Integer" },
                { "field-name": "total", "base-type": "type/Float" },
                { "field-name": "quantity", "base-type": "type/Integer" }
              ]
            },
            "creator": null,
            "schema_name": "metabase_cache_944b9_2",
            "database_id": 2,
            "collection_id": null,
            "database_name": "sample dataset pg",
            "state": "persisted",
            "refresh_begin": "2022-07-07T14:22:33.509687Z",
            "refresh_end": "2022-07-07T14:22:33.517815Z",
            "collection_name": null,
            "collection_authority_level": null,
            "creator_id": null,
            "card_archived": false,  <---- new
            "active": true,
            "id": 2,
            "card_dataset": false,   <---- new
            "card_id": 2,
            "error": null,
            "next-fire-time": null,
            "table_name": "model_2_model_from",
            "card_name": "model from orders no longer model"
          },
          {
            "definition": {
              "table-name": "model_1_model_from",
              "field-definitions": [
                { "field-name": "id", "base-type": "type/Integer" },
                { "field-name": "total", "base-type": "type/Float" },
                { "field-name": "quantity", "base-type": "type/Integer" }
              ]
            },
            "creator": null,
            "schema_name": "metabase_cache_944b9_2",
            "database_id": 2,
            "collection_id": null,
            "database_name": "sample dataset pg",
            "state": "persisted",
            "refresh_begin": "2022-07-07T14:22:33.476072Z",
            "refresh_end": "2022-07-07T14:22:33.504025Z",
            "collection_name": null,
            "collection_authority_level": null,
            "creator_id": null,
            "card_archived": false, <---- new
            "active": true,
            "id": 1,
            "card_dataset": true,   <---- new
            "card_id": 1,
            "error": null,
            "next-fire-time": null,
            "table_name": "model_1_model_from",
            "card_name": "model from orders"
          }
        ],
        "total": 3,
        "limit": 20,
        "offset": 0
      }
      ```
      
      * Add conditions to checkCanRefreshModelCache (#23778)
      
      Co-authored-by: Gustavo Saiani <gustavo@poe.ma>
  10. Jul 12, 2022
    • First pass at parameters in Markdown cards (#23641) · e136f430
      Noah Moss authored
      * first pass at parameters in text cards on FE
      
      * trying to get translations working
      
      * relative datetime formatting
      
      * copy changes and 'Text card' header
      
      * default text when no params
      
      * hide header for text cards with height of 1 with params when in param mapping mode
      
      * show UI text in mobile mode
      
      * minor fixes
      
      * enforce that a text card variable can only be mapped to one parameter
      
      * more value formatting
      
      * noop
      
      * fix backend tests
      
      * add back a couple pieces of frontend logic commented out
      
      * misc cleanup
      
      * attempt at adding a FE unit test
      
      * revert unit test, doesn't work
      
      * add a couple of basic cypress tests and fix a couple of bugs
      
      * basic unit tests for cljc
      
      * fix error
      
      * expanded unit tests
      
      * simplify ns
      
      * add cypress test for instance language translation
      
      * basic handling for a couple cases of :date/all-options
      
      * trs docstring clarification
      
      * whitespace tweaks
      
      * fix cypress test
      
      * minor refactor of tag-names
      
      * move cljc file from utils to new parameters dir
      
      * reorder functions
      
      * fix lint
      
      * add test assertion that locale is correctly reset back to english, and add a comment
      
      * fix bug where existing parameter mapping target was not being found
      
      * clojure logic tweaks
      
      * move text card header text to the Text component config
      
      * simplify header logic, and pull out isLoading into a function to reduce complexity
      
      * address alex's css feedback
      
      * fix trs comment
    • Remove `update_collection_tree_authority_level` from PUT collection/:id (#23718) · 61ee8a35
      adam-james authored
      * Rmv `update_collection_tree_authority_level` from PUT collection/:id
      
      * Remove test that was only testing the removed feature anyway
    • Port fixes for Postgresql JSON issues to MySQL (#23758) · a26cba2a
      Howon Lee authored
      #22967 and boat#161 also apply to the MySQL JSON implementation. This PR ports the fixes for those (in #23278 and #22997 respectively) to MySQL.
      
      There are many other JSON implementation fixes which don't need a separate port because we moved the describe-table mechanism to the SQL JDBC driver instead of just the Postgres driver.
  12. Jul 09, 2022
    • 22732 rework (#23722) · b6158cf2
      Howon Lee authored
      Pursuant to reopening of #22732.
      
      Proximate cause is occasional Java bigints getting in where the types assume they'll only be Clojure bigints. On the current reproduction I have, it fixes the bug, but I am still determining the ultimate cause and whether there are other phenomena. There was also differential behavior between the backport and the on-master implementation which I still don't understand.

      This one needs some sync to have gone through first.
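
      One way to guard against the mismatch, as a hedged sketch (not
      necessarily the PR's exact fix):

      ```clojure
      ;; java.math.BigInteger is not a clojure.lang.BigInt, so code checking
      ;; for Clojure bigints can miss values coming straight from JDBC
      (defn normalize-bigint [x]
        (if (instance? java.math.BigInteger x)
          (clojure.lang.BigInt/fromBigInteger x)
          x))

      (type (normalize-bigint (java.math.BigInteger. "9999999999999999999")))
      ;; => clojure.lang.BigInt
      ```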
  13. Jul 08, 2022
    • Add serialization hierarchy, serialize Database, Table and Field (#23622) · c952c87c
      Braden Shepherdson authored
      Serialization of Databases, Tables, Fields
      
      This brought a few core changes:
      - Add `serdes-entity-id` to abstract the field used for the ID
      - Pass the options to `extract-one` so it can, e.g., do encryption things.
      - Handle dates in YAML storage and ingestion
      - `:serdes/meta` now holds the entire hierarchy, not just the leaf model+ID pair.
      
      There's an open problem here about the right way to handle secrets like
      a database's password. Do we assume both sides have the same
      `MB_ENCRYPTION_SECRET_KEY`? Or provide a serdes-specific password the user
      just made up, and every secret gets decrypted with the source key, encrypted with
      the serdes key, stored, decrypted with the serdes key, and encrypted with
      the destination key?
    • Update model caching schedule API (#23734) · fc54bd83
      Anton Kulyk authored
      
      * Replace interval-hours and anchor-time with cron
      
      Removed the two settings and replaced with a single cron schedule setting
      
      Renamed the /set-interval endpoint to /set-refresh-schedule
      
      * Ignore "year" part in schedule endpoint
      
      * Fix variable
      
      * Use a temp scheduler and initialize the refresh job
      
      Running into errors when updating the triggers for each database's
      refresh job because the job itself didn't exist. Reminder of what's
      going on here: there's a single refresh job with a trigger for each
      database, so updating a trigger would fail when there was no job to
      hold the triggers.
      
      This error is quite clear in the tests run locally:
      
      ```
      ERROR in metabase.api.persist-test/set-refresh-schedule-test (util.clj:421)
      Uncaught exception, not in assertion.
      
              clojure.lang.ExceptionInfo: Error in with-temporary-setting-values: Couldn't store trigger 'DEFAULT.metabase.task.PersistenceRefresh.database.trigger.1' for 'DEFAULT.metabase.task.PersistenceRefresh.job' job:The job (DEFAULT.metabase.task.PersistenceRefresh.job) referenced by the trigger does not exist.
          location: metabase.public-settings/persisted-models-enabled
           setting: "persisted-models-enabled"
             value: true
      ```
      
      But, annoyingly, this logging is absent from the output in GitHub:
      
      ```
      FAIL in metabase.api.persist-test/set-refresh-schedule-test (persist_test.clj:26)
      Setting new cron schedule reschedules refresh tasks
      Setting :persisted-models-enabled = true
      expected: "0 0 0/12 * * ? *"
        actual: (nil)
      ```
      Which doesn't give us the error, just the test result.
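
      The ordering fix, sketched loosely (the job/trigger helpers here are
      hypothetical):

      ```clojure
      (declare ensure-refresh-job! persisted-databases reschedule-database-trigger!)

      (defn set-refresh-schedule!
        "Ensure the single PersistenceRefresh job exists before touching its
        per-database triggers; rescheduling a trigger whose job is absent
        throws."
        [cron-schedule]
        (ensure-refresh-job!)
        (doseq [db (persisted-databases)]
          (reschedule-database-trigger! db cron-schedule)))
      ```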
      
      * update to newer API `set-interval` -> `set-refresh-schedule`
      
      ```clojure
      ;; old
      {:hours 4}
      
      ;; new
      {:cron "0 0 0/1 * * ? *"}
      ```
      
      Co-authored-by: Case Nelson <case@metabase.com>
      Co-authored-by: dan sutton <dan@dpsutton.com>