Introduce the DAX EvaluateAndLog Function

There is a hidden gem in the August 2022 release of Power BI Desktop: a secret new DAX function, EvaluateAndLog, that is not listed in the feature summary of the official blogpost. Yet this function will become a great learning and debugging tool for anyone who writes DAX code in the years to come. Microsoft has a good reason not to announce the function publicly yet: there are a couple of known bugs and, more importantly, the work for intellisense support in the DAX formula bar has not been completed. Therefore, if you try the function today, you won’t see it on the suggestion list, and if you type the function name yourself, you will see a red squiggly line underlining it. But the expression should calculate just fine in the latest version of Desktop. Ironically, the lack of intellisense support is really a blessing in disguise in my opinion. I’ll explain why I think so at the end of this blogpost.

So what is EvaluateAndLog for? Let’s say you define a measure [Profit] with the following expression and add it to the report.

[Profit] = Sum('Sales'[SalesAmount]) - Sum('Sales'[Cost])

You want to double check if the numbers you see on the report are correct. In the past, you would add the two columns [SalesAmount] and [Cost] as values to your report temporarily to inspect the numbers. Now there is a new way without changing your visual: you can temporarily change your measure definition to

[Profit] = EvaluateAndLog(Sum('Sales'[SalesAmount])) - EvaluateAndLog(Sum('Sales'[Cost]))

The measure will return the same result so your visual doesn’t change, but the modified expression will cause the DAX Engine to send the intermediate results of both Sum('Sales'[SalesAmount]) and Sum('Sales'[Cost]) to a secret channel, which I call “debug output”, for you to inspect. In other words, you can use EvaluateAndLog to send the intermediate result of any portion of your DAX expression to debug output without changing the result of the original expression. Let that sink in: any intermediate result of your DAX expression. Can you imagine how powerful this is? Not only can you see the values of the two individual Sum’s before the subtraction, but you can also use EvaluateAndLog('Sales') to inspect some of the raw data that feeds into the two Sum’s! I’ll explain how to do it in a future blogpost as this feature is too big to be covered in a single one.
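As a sketch of the table case, you can wrap the raw table inside an aggregation so the measure still returns the same number while the table contents go to debug output. The measure name below is made up for illustration, and early builds of the August release had known bugs around table-valued arguments, so this pattern may require a patched version of Desktop:

```dax
-- Hypothetical debugging variant of [Profit]: returns the same total,
-- but also logs the rows of 'Sales' that feed the calculation.
[Profit Debug] =
    SUMX (
        EVALUATEANDLOG ( 'Sales' ),
        'Sales'[SalesAmount] - 'Sales'[Cost]
    )
```

The visual shows the same profit numbers as before, while a sample of the 'Sales' rows appears in the debug output.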

At this point, some people might have realized that EvaluateAndLog is like the “print debugging statement” in other programming languages. Having been a professional software engineer my entire career, I have witnessed firsthand how debugging tools and practices have advanced by leaps and bounds in the past decades. In spite of all this progress, the “print debugging statement” still accounts for most actual debugging activities in the real world, because no other tool can replace the convenience and versatility of a simple “print” and its many variations in different programming languages. So EvaluateAndLog opens the door to a whole new world of DAX learning and debugging.

Now that you are all hyped up, let’s start with the syntax of EvaluateAndLog:

You can wrap the function around any portion of your DAX expression that returns either a scalar value or a table. When the expression is calculated, DAX Engine sends the intermediate result in JSON format as the content of the TextData column of an Analysis Services Trace Event, called “DAX Evaluation Log”.
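In other words, the function is simply a wrapper around the expression being inspected. Based on Microsoft’s documentation of the function (the optional arguments may evolve), the signature and a labeled variant of the [Profit] example look like this:

```dax
-- EVALUATEANDLOG ( <Value> [, <Label>] [, <MaxRows>] )
-- <Label> tags the trace event so you can tell multiple logs apart;
-- <MaxRows> caps how many rows of a table result are logged.

[Profit] =
    EVALUATEANDLOG ( SUM ( 'Sales'[SalesAmount] ), "sales" )
        - EVALUATEANDLOG ( SUM ( 'Sales'[Cost] ), "cost" )
```

Both variants return the same value as the plain [Profit] measure; only the logging differs.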

Now that we have covered the mechanics of EvaluateAndLog, the challenge is how to retrieve and visualize the output of the function. I believe that, in due time, Power BI Desktop and other third-party tools will add the ability to display the output of the function in a user-friendly fashion. Until that happens, if you must use an officially supported tool from Microsoft, you can download and install Microsoft SQL Server Profiler, which is a component of the free SQL Server Management Studio (SSMS). Afterwards, you have to find out the port number of Power BI Desktop, connect SQL Server Profiler to Desktop, and subscribe to the DAX Evaluation Log event and other relevant events; then you can see the output of the function. Below is a screenshot of SQL Server Profiler displaying the output of EvaluateAndLog.

Going the SQL Server Profiler route is neither easy nor user-friendly. To deliver the benefits of this new way of debugging DAX to the Power BI community sooner, I rolled up my software engineer sleeves, learned some basic UI programming, and developed a Windows desktop application on my own. You can download the free software, named DAX Debug Output, from here and install it on the same machine as Power BI Desktop. You need to have admin rights on the machine and agree to install software from an unknown publisher, as this is my own project, not a Microsoft product. Below is an annotated screenshot of the main window of the application. I will explain how the application works in my next blogpost.

EvaluateAndLog is currently enabled only in Power BI Desktop and disabled in other Power BI products. In environments where it is disabled, the function does nothing other than pass through the result of its child expression. When enabled, the function may disable some DAX optimizations, so it’s imperative that you delete the function from your DAX code after you are done with debugging. This is where the red squiggly line due to the current lack of intellisense support comes in handy: it’s a convenient reminder that you should fix the expression before publishing it to the Power BI service. Although the function has no impact in the service today, it could cause performance degradations once Microsoft enables the function in more production environments.

We have barely scratched the surface of EvaluateAndLog but this is already a long post. I plan to write more posts on this subject in the following weeks. In the meantime, if you can’t wait to play with the function, here is the link to DAX Debug Output again. Download both DAXDebugOutputSetup.msi and setup.exe to your machine and run setup.

https://github.com/pbidax/DAXDebugOutput/releases/tag/v0.1.0-beta


8 thoughts on “Introduce the DAX EvaluateAndLog Function”

  1. Chris Webb has found a bug when he tried
    CountRows(EvaluateAndLog(Sales))
    There is another known bug for patterns like
    Var _ = EvaluateAndLog(Sales) Return CountRows(Sales)
    Both bugs will cause query failures.
    We will fix both bugs by patching the August release of Desktop.


  2. This looks great!

    I have a strong opinion that I will probably be screaming about until this is GA and operable in the service:

    > EvaluateAndLog is currently enabled only in Power BI Desktop but disabled in other Power BI products. The function doesn’t do anything in the environments where it is disabled other than passing through the result of its child expression. When enabled, the function may disable some DAX optimizations so it’s imperative that you delete the function from your DAX code after you are done with debugging.

    > Although the function has no impact in the service today, it could cause performance degradations once Microsoft enables the function in more production environments.

    Any feature whose best practice guidance is “write and rewrite your code” is poorly implemented and begging to introduce unnecessary human error. There should be some sort of user-controllable switch or flag to toggle whether this function does work (and potentially impacts performance) or acts as a no-op.

    Ideally there would be a flag settable in the connection string to control whether this function will act as a no-op or have potential performance impacts. If that is not possible, then it should be a per-model setting. If that is not possible, then we should be able to toggle it for the Tabular instance. If this introduces difficulties in the Power BI Service, then deal with it. I realize this is a hard-line stance, but I feel it must be.

    Asking users to write and rewrite code is an anti-feature.


    1. There will be a control to turn it off in production environments. That’s exactly why it’s disabled everywhere else except in Desktop until Microsoft figures out what level of control is given to users. Desktop should scan the model for any EvaluateAndLog accidentally left over in the pbix file before publishing it to the Service. If that won’t happen soon, I can add it to DAX Debug Output pretty easily.


      1. > Desktop should scan the model for any EvaluateAndLog accidentally left over in the pbix file before publishing it to the Service.

        What would the result of such a scan be?
        If it is a scan to suggest to a user that they alter their code, e.g., “Hey you have used EVALUATEANDLOG, and it may have a performance impact. Please consider removing these calls,” that is not great. It pushes us right back to the bad result of “best practice is for users to write and rewrite code.” Again, asking users to write and rewrite code is an anti-feature.

        If it is a scan to automatically remove these calls, that is terrible. Deploying should not alter the artifact being deployed. Also, this seems like it would break the ability to download a PBIX from the Service and get back the thing you deployed (modulo data refreshes). I should be able to round-trip my code from dev into prod and download it again and not have my code modified. Even if you can re-inject the calls on PBIX download, that is a negative. I do not want to be running different code in prod than I have tested.

        If it is a scan to tell the user to change some model setting (which would control behavior of EVALUATEANDLOG calls) before deploying, that is potentially worrying. If the setting can only be changed in Desktop, such that a redeploy is necessary to toggle the setting, that is not great. This would prevent us from utilizing a valuable debugging tool for an artifact as it exists. If there is something we want to observe in a model, it is a bad idea to replace the model (redeploy) to then try to observe it, because we are observing a different artifact. If such a setting could be modified in situ for a deployed model, that is more tenable, as we can change a single setting to be able to observe the deployed artifact.

        I am open to the idea that I am missing a potential result of such a scan, and would love to hear ideas to that end. These are the three reasonable outcomes that occur to me for what a scan of the model would result in.

        Being able to observe code and data in situ is a hugely valuable capability. Having to replace or modify the code to observe it is not nearly as valuable.

        All of this is not intended as criticism of what is clearly new and still being figured out. I am excited to see better tooling for observability of PBI and Tabular models, and want to see it be as useful as possible. I am happy to discuss this in great detail if you want to follow up with the email account on these comments.


  3. You made valid points. This is why EvaluateAndLog will stay a Desktop-only feature in the near future. We will continue to tweak the function, especially its interactions with various optimization algorithms. The function can have non-debugging-related purposes in the service, such as leaving an audit trail of model activities. If there is high demand for debugging live in the service, we could consider leveraging LiveConnect or DirectQuery to Power BI datasets to limit queries to those issued from within Desktop by users with permissions controlled by the service admin. An ideal experience is not to insert and remove the function manually, but rather a real debugger experience: highlighting a subexpression or adding an expression to a watch list for the debugger to automatically inject the function during a debugging session.


    1. It sounds like there are some really powerful capabilities we can look forward to!

      Regarding optimizations, there seem to be three primary areas of concern:

      1. Potential laziness forfeited. If EvaluateAndLog logs the expression unconditionally, we would see, e.g., all branches of a conditional evaluated regardless of if they otherwise would be.

      2. Forcing IO involvement in otherwise SE-mostly queries, e.g., in an iterator’s inner loop – forcing every iteration expression to log out. As a specific example, something like SUMX ( 'Sales', EVALUATEANDLOG ( 'Sales'[Quantity] * 'Sales'[UnitPrice] ) ), which would otherwise be mostly SE; now every row of 'Sales' forces a log event. Or I guess any large input table, even if the query is not mostly SE.

      3. It seems this might force materialization of something which might otherwise not need to be materialized. E.g., CALCULATE ( [Some Measure], EVALUATEANDLOG ( ALL ( 'Some Table' ) ) ). Right now this yields a notice, but if it were extended to support something like this, it seems a potentially big performance loss. Notice below:
      {
        "expression": "ALL( 'Some Table' )",
        "notice": "EvaluateAndLog function is not executed due to optimization."
      }

      Are there other major areas of DAX where this impacts performance?

