Peter: Good afternoon, everyone. My name is Peter Jennings. I have Matthew Fisher on as well. We’re gonna go ahead and get started. Tonight, we’re gonna talk about insightsoftware Analytics and Reporting for D365 and AX. One of my favorite sages is Yogi Berra and we’re gonna talk today about forks in the road and decisions to make and so we’ll keep coming back to him.
So first, a little bit about us. We have about 60 years of combined experience. We have 500 employees. We are the number one reporting provider for ERPs across the globe. And I'm very confident that we're the number one report writer. I'm confident in both, but extremely confident that we're the number one report writer in Dynamics. We have over 17,000 customers in every type of market and business you could think of. Some of these are huge names, some of these are medium, and some of them are small.
We provide a different level of solution up and down the channel and have done very, very well. At the upper end of the market, D365 Business Central, D365 Finance & Ops, and AX, we have almost 4,000 projects under our belt. Of those 4,000 projects, about 3,600 are still active and working with us. We think there are actually a few more on the older releases, AX 2004 for instance, that are still using it but have dropped off active maintenance because we're not updating for those anymore. But that's a very high retention rate and a very, very solid group of customers.
Okay, so that’s a short introduction on us. We’re gonna talk a little bit today about direct versus indirect reporting in the D365 space, what the challenges are, and then we’re gonna take one of those forks, and we’re gonna go down one with the demo on one of the two sides.
So the biggest issue is data. If the data was simple, we probably wouldn't be sitting here. It's not. It's gotten more and more complicated. As they have built out D365 and AX, they have added tables and split data to the point where it's now normalized beyond what most people would expect: five or six tables to do a name and address listing, depending on whether you're international or not. So that's a lot of tables to do something that's simple. The data is very spread out and finding it is very hard.
On top of that, they're evolving into multiple platforms. So they have AX with one set of tables. They've now gone to D365 Sales versus F&O, and they've now split Finance and Operations. Operations is going to get a new name, because of course Microsoft is renaming it; it's actually gonna be called Supply Chain. So there'll actually be three islands of data, plus the Salesforce or outside sales products, if you wanna put everything together for an overall solution. So it's getting more and more complex.
There's also the challenge of how you get at your data. In years past, you could go directly to the AX data. We have a very solid reporting tool for that, very strong. It went directly into the AX platform and figured out what all the relationships were, how everything fit together. That's changed with D365. We now have to go through what are currently called data entities. This started as an abstraction, which was a view; it used OData, which caused performance problems. Microsoft has now encouraged people to move over to what is called BYOD, bring your own database. So they're now taking the data entities and creating a physical table in a database. You could look at it as essentially a SQL table, no more complex than that. It's sitting in a BYOD and you can go at it and run your reports from there. That's evolving.
They’re now going to move away from BYOD. In fact, they announced the end of BYOD recently. And they’re moving into data lakes. And data lakes is a repository of raw data. So what they’ve done is they’ve gone from raw data to a composite view or a composite tables and it looks like they’re gonna go back to raw data tables. So it’s an evolving process and it’s something that you’ve gotta be careful about how you aim and how you hit. So we do reporting directly. We do reporting indirectly. The indirect reporting requires the BYOD, of course, so does the direct reporting right now, but that may go away and we may go back to direct reporting through data lakes.
We have data warehouses and cubes that we support. Our data warehouse and cube solution also supports Power BI. So we can very easily enable management and the operational people with the same data. That’s very important. We can also combine data from any source and you’re gonna see that from Matthew today.
So a simple picture of what our system looks like when we’re reporting directly. We literally go out and read. When we’re reporting indirectly, it’s a little trickier. We can pick up the old AX on-premise data, which a lot of people are having us do. We mix that in with the D365 information and we use Microsoft’s export services. And then on top of that, we put our data manager, which merges the data from these two sources together, drops into a staging database and then out into data warehouse and analysis services.
These can be either tabular or OLAP. And then you can read this with whatever you want. Another advantage is that we can pull data from anywhere else and merge it in with this. So a lot of our larger customers are quite pleased, because they're able to take the data that they have currently, their old data, maybe some information from one or two small systems on the side, put it into one place and get everything out. So it's worked really well.
So we're gonna take the fork to the right, the one that makes use of both Power BI and report solutions, and we're gonna run down and look at that. It fits a model of data size. With small amounts of data, direct reporting tends to work very well. The larger your data gets, the more likely a data mart's gonna come into being, and it's going to be faster, more efficient, and not impact performance. And when you get into an enterprise level and start mixing data from multiple sources, it becomes imperative. Most of the larger companies out there understand this and are moving toward that.
So a couple of things I want you to think about. You have a life cycle here. You would select a system, of course. Once you've done that, then the real work starts. You've got an initial project to define and develop your data and all of the reports you want. That's a fair amount of work. Once you've built that, then you deploy it. And the main element here is reaction: how fast can you react? Maybe the initial deployment was close, it was good, but as in every project like this, there are little changes and modifications that need to be made. And then of course, once you get going, your success is gonna bring more work and it's certainly not gonna be over. In fact, you're probably just starting down the road at that point.
So as you go through this and look today at our solutions, I want you to picture how we would compare against the more traditional tools out there for cost and timing. How much effort is it gonna take on your side and the consultant's side? And really important: once you roll this out, how fast can you respond to the changes that you need? Underlying our entire solution is self-enablement.
So these are the things, if you would please keep in mind. We’re gonna now switch over to Matthew and have a look. He’s gonna actually get into the system itself and go ahead and have a look at how this works, how we’ve put things together. So, Matthew, you’re on.
Matthew: Okay. Thank you, Peter.
Peter: You’re welcome.
Peter: Yes, we can see your screen.
Matthew: All right. I was just gonna ask, so thank you. Thank you. So thanks, everybody. What we're looking at here is called the Jet Data Manager, or the JDM as I'll sometimes refer to it. This is our ETL automation tool. ETL is a common term in the BI world. It stands for extract, transform, and load. All right?
So basically what we’re doing is we’re extracting data out of one or more data sources. All right? This data source happens to be a D365 F&O BYOD database. So that’s my extract, my E in ETL. We’re extracting from this data source. We’re transforming it up here in a staging area. Okay. So the transformation can be something as simple as maybe like cutting down the number of characters in a field, or maybe changing a data type on a field, or it could be something as complex as maybe looking up data in additional tables, maybe merging data together. There’s a whole host of different things and I’ll go into that in some more detail. But the staging area is the transform area. And the L in ETL stands for load. Finally, we’re loading that data into a set of data warehouse tables and a set of analysis services cubes, okay?
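The extract-transform-load flow Matthew describes can be sketched in miniature. This is an illustrative toy, not the JDM's actual process: the table names, field names, and transformations (trimming padded text, normalizing a currency code's case) are all invented for the example.

```python
import sqlite3

# Toy "source" database standing in for a BYOD extract.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE CUSTTABLE (ACCOUNTNUM TEXT, NAME TEXT, CURRENCY TEXT)")
src.executemany("INSERT INTO CUSTTABLE VALUES (?, ?, ?)",
                [("C-001", "Contoso Ltd         ", "usd"),
                 ("C-002", "Fabrikam Inc        ", "eur")])

# Extract: pull the raw rows out of the source.
rows = src.execute("SELECT ACCOUNTNUM, NAME, CURRENCY FROM CUSTTABLE").fetchall()

# Transform: trim padded text and uppercase the currency code --
# the kind of simple field-level change done in the staging area.
staged = [(acct, name.strip(), cur.upper()) for acct, name, cur in rows]

# Load: write the cleaned rows into a warehouse-side table
# with user-friendly column names.
dw = sqlite3.connect(":memory:")
dw.execute("CREATE TABLE Customer (AccountNo TEXT, Name TEXT, CurrencyCode TEXT)")
dw.executemany("INSERT INTO Customer VALUES (?, ?, ?)", staged)

print(dw.execute("SELECT * FROM Customer").fetchall())
```

The point of the JDM is that you configure these three stages graphically instead of scripting them by hand.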
So let's start down here at the extract. Let's start down here at my data source area, and I'll just use the customer table as an example. That's kind of an easy one to go through here. So on the left-hand side, we're gonna see all of the fields that I'm including in this project, okay? All of the fields that I'm pulling in. On the right-hand side, I'm gonna see all of the fields in the table, okay? So this is important because this is an example of how easy it is to just add additional data points into your project. So for instance, let's say that somebody comes up and maybe the CURRENCY field, maybe they want that exposed on their customer table, okay? On the right-hand side here, I'm just gonna find the CURRENCY field and tick that box. Now, that's gonna be pulled over into my project and it's gonna be included. It's that easy. Okay.
So if I scroll up here into my staging area, remember the transform in ETL. What I should see is that Cust Table is red. That means that I’ve made a change to the structure and I haven’t deployed that out to my user base yet. All right? So everything that I’ve done so far is kind of in limbo. All right? It’s in kind of a holding pattern. All right. And by the way, something I forgot to mention at the get go, what you’re seeing here, I’m referring to this as a project. Inside of the JDM, this is a project, an ETL process. Okay.
All of this is a prepackaged project that is downloadable from our cube store. So that’s usually great news, especially to an IT group because probably 85% or 90% of all of the data mappings, all of the tables and the structure of this BI environment is already done for you. All right? So that’s a huge head start. But let’s get back to our transform process. Here’s my customer table. There’s that CURRENCY code field that I was talking about. Let’s talk about some of the other…the functionality inside the staging environment.
These fields up here on the top, these purple tables, represent custom tables inside the system. So we can pull tables in directly from a data source, or we can create a table from scratch and fill it with custom data if we want. So you have that option. An example of why you would use a custom table: let's say you're pulling in data from two different ERPs. Maybe there's a legacy ERP system and then you upgraded to D365, and subsequently maybe your chart of accounts changed at the same time. You would want to take that legacy data, and if you wanna merge that in, you need some way of mapping the old chart of accounts structure to the new one. That's what you might use a custom table for, a mapping like that. Okay.
Some other things about staging here: these regular-colored fields are coming directly from my source. The red fields are what are called conditional lookups. So this is another example of the transformation that I was talking about. If you've ever seen an AX or a D365 database, you know that the data is spread out among many, many different tables. I think there's something like 7,000 tables in the latest release. Okay. That means that it's very efficient for transaction entry, you know, a relational database; that's what it's about, storing the data only once. But that's not always as efficient for reporting, right? You don't want a user to have to deal with, for instance, a customer listing where the account number is in one table, the name is in another table, and the city, part of the address, is in yet another table, okay?
So we don’t want our end users to have to go through that labor-intensive process of joining a customer table with a name table, with an address table, just to get a customer listing, okay? So what we’re doing here in the JDM is we’re doing all that work for you. So I’m going out to this additional table. I’m looking up the Name, and I’m including that with my customer table, doing the same thing for all of these other red fields here.
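In SQL terms, a conditional lookup of this kind behaves like a left outer join: the customer row survives even when the looked-up table has no matching row. Here is a minimal sketch, assuming an invented three-table split (the real AX/D365 party schema is more involved):

```python
import sqlite3

# Illustrative schema: account number, name, and city each live in
# their own table, keyed by a shared PartyId.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE CustTable (AccountNum TEXT PRIMARY KEY, PartyId INTEGER);
CREATE TABLE PartyName (PartyId INTEGER, Name TEXT);
CREATE TABLE PartyAddress (PartyId INTEGER, City TEXT);
INSERT INTO CustTable VALUES ('C-001', 1), ('C-002', 2);
INSERT INTO PartyName VALUES (1, 'Contoso Ltd'), (2, 'Fabrikam Inc');
INSERT INTO PartyAddress VALUES (1, 'Seattle');  -- C-002 has no address row
""")

# LEFT JOINs act like conditional lookups: every customer appears,
# and a missing address simply comes back as NULL.
listing = con.execute("""
    SELECT c.AccountNum, n.Name, a.City
    FROM CustTable c
    LEFT JOIN PartyName n ON n.PartyId = c.PartyId
    LEFT JOIN PartyAddress a ON a.PartyId = c.PartyId
    ORDER BY c.AccountNum
""").fetchall()
print(listing)
```

The JDM generates joins like this for you; the end user just sees one flat Customer table.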
Okay. So what we’ve done so far is we have done the transformation, but we haven’t done the load part yet. So I’m gonna double click here on my data warehouse. This is the actual customer end-user-facing interface right here, okay? And here is my Customer table, okay? So this is what my end users see. So you can see that I’ve gone through and renamed everything to be more user-friendly, proper case, make it look nice and neat.
If I wanna pull that CURRENCY code field in, I’ve got that field right here, I can just click and drag it over, probably rename it. Okay. Again, that table turns red. That means that there’s been a change but it hasn’t been actually deployed out yet, okay?
So that’s as easy as it is to create an additional data point and put that in your data warehouse. So that’s some pretty simple stuff. But what if we wanna do something a little more complex? Let’s say that we have an additional data source that we wanna pull data in and either create separate data warehouse tables or maybe even merge in with our existing one or some of our existing tables. That’s gonna be really easy to do.
We can connect to a wide, wide variety of different data sources. I don't know that I've ever heard of a case where we can't connect to a data source. We've got your basics here: SQL, Oracle, DB2. Basically, anything ODBC or OLE DB compliant, we can connect to. We can connect to Excel spreadsheets; they just present themselves as tables inside the interface here, so that makes it kind of handy. Text files: maybe you have a very old legacy system that doesn't allow connectivity, but it will allow an export to a text file; we can go out and read that text file in. Maybe there are multiple files inside of a single folder; we can go out to that folder, read those files in, and then process them subsequently.
If you don’t see what you need on the list here, we’ve also partnered with a company called CData and CData is a company that specializes in ODBC connectors for different systems. Especially in the world of cloud computing, this becomes especially important because, you know, if you have a cloud-based system like Dynamics 365, they don’t allow you direct connection to their database anymore. So you don’t have a SQL server or an Oracle database either on-premise or in the cloud that you can connect directly to. If it’s a cloud-based system, typically, they have like a web service or maybe like an OData connection and that’s how you access the data in the cloud. So in order to do that, we’re using one of these CData connectors to go out to that cloud system and retrieve the data.
We’ve also written a few adapters of our own. We do specialize in the Dynamics line of products. So we’ve got a NAV Adapter, a regular on-premise AX Adapter, CRM, GP. We’ve also written a Salesforce Adapter. But like I said, you’d be kind of hard pressed to find something that we could not connect to. Let’s just say… I’m gonna use an example here with a local SQL server database, a CRM database, I should say. Okay. So I’m just gonna define that connection. Make sure I can connect to it here.
And then nothing is showing up yet because I have to read those objects in from that data source. So the system is gonna go through and cycle through. It’s gonna present every single table and field, every view that’s in that database on the right-hand side here.
So let’s say that I want to maybe…maybe the CreditLimit, for instance, is stored in CRM instead of in your ERP for some reason, maybe the salespeople maintain that in their CRM database. But for our purposes of reporting, I wanna include that in my customer table inside of my data warehouse. So I’m gonna go into this account table in my CRM database, select the field. I have to have a field to link to my customer inside of Dynamics 365. So I’ll probably just use maybe the AccountNumber field to do that. It’s gonna depend on the data source, but that’s a pretty safe bet.
So once I’ve got those two things selected, if I look over here in my data source, I’m gonna see those two fields. All right. And then if I hop up here to staging, I should see a CRM table in my staging area. And there it is. I’m just gonna click and drag this up just so that we can see what we’re doing all together here. So there’s my CRM data. This is my regular customer table. Look how easy this is to link these two tables together and grab this information. I’m just gonna click this AccountNumber field. I’m gonna marry that to this ACCOUNTNUM in my customer table. It knows I wanna create a relationship or a mapping.
And then I’m gonna do the same with the CreditLimit. It knows I already have a relationship between the tables set up, so it’s gonna create one of those conditional lookups to pull that CreditLimit field in. Now, my final step would be to go out to that customer table, grab that CreditLimit field, and just pull that in. And again, probably rename it to match my naming standards. I’ll tell them it’s from CRM. Okay. Now, once I deploy that out, that Credit Limit is gonna be in the customer table just as though it had been stored in the ERP all along. Okay.
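The merge Matthew just performed, pulling a CRM-maintained CreditLimit into the warehouse customer table by matching on account number, amounts to an update from a joined source. A toy sketch, with invented table and field names, using an attached second database to stand in for the CRM source:

```python
import sqlite3

# Warehouse database, plus an attached "crm" database as the second source.
con = sqlite3.connect(":memory:")
con.execute("ATTACH ':memory:' AS crm")
con.executescript("""
CREATE TABLE Customer (AccountNo TEXT PRIMARY KEY, Name TEXT, CreditLimit REAL);
INSERT INTO Customer (AccountNo, Name)
    VALUES ('C-001', 'Contoso Ltd'), ('C-002', 'Fabrikam Inc');
CREATE TABLE crm.Account (AccountNumber TEXT, CreditLimit REAL);
INSERT INTO crm.Account VALUES ('C-001', 50000), ('C-002', 75000);
""")

# The lookup: fill CreditLimit in the warehouse table from the CRM
# source, matching on the shared account number.
con.execute("""
    UPDATE Customer
    SET CreditLimit = (SELECT a.CreditLimit FROM crm.Account a
                       WHERE a.AccountNumber = Customer.AccountNo)
""")
result = con.execute(
    "SELECT AccountNo, CreditLimit FROM Customer ORDER BY AccountNo").fetchall()
print(result)
```

After this, CreditLimit sits in the customer table "as though it had been stored in the ERP all along," which is exactly the effect described above.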
So what I just showed you, that represents probably hours of work in a standard ETL tool. That's one of the things that's so valuable about Jet Analytics: the ease and efficiency with which you can go in and work with your BI models here, like I said, joining additional data sources. If you had to script all of that out in SQL, it could represent hours and hours of effort on your part.
Peter: Matthew, if I could ask, the system read that CRM database really, really nicely, can it do that with any database? Could it do that with the Oracle database for instance?
Matthew: Absolutely. Absolutely. Like I said, connecting to Data Sources is… It doesn’t care what it is. We can connect to a wide, wide variety of different data sources. I can even pull in data from an additional project, which is what I’m doing here. I’m actually pulling in data from an AX project. So in this scenario, let’s say you have an on-premise AX database and then you’re converting to Dynamics 365 in the cloud, we can pull in the data from both and we can merge those together.
So this AX Business Unit represents data from my on-premise AX database. I'll just move that over there so we can see both. And I think in my Finance Transactions table, I've actually joined the data from both. So if I preview this table, I'm seeing information from my D365 database, but it's also joined in with my AX database. All right? So it makes it really, really easy to merge data in from multiple sources like that, very, very easy.
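Combining the legacy AX transactions with the D365 transactions into one Finance Transactions table is, in relational terms, a union of the two sources, often with a column tagging where each row came from. A minimal sketch with made-up tables:

```python
import sqlite3

# Two toy transaction tables standing in for the on-premise AX data
# and the D365 export; schema is invented for illustration.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE AX_Trans (Account TEXT, Amount REAL);
CREATE TABLE D365_Trans (Account TEXT, Amount REAL);
INSERT INTO AX_Trans VALUES ('4000', 100.0);
INSERT INTO D365_Trans VALUES ('4000', 250.0);
""")

# UNION ALL stacks the two sources; the literal column records
# which system each row originated in.
merged = con.execute("""
    SELECT 'AX' AS SourceSystem, Account, Amount FROM AX_Trans
    UNION ALL
    SELECT 'D365', Account, Amount FROM D365_Trans
    ORDER BY SourceSystem
""").fetchall()
print(merged)
```

Reports built on the merged table then see one continuous transaction history across the migration.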
Let me close a couple of these down. We haven't talked about OLAP cubes yet. I've loaded my data into my data warehouse. Now, what am I gonna do with it? I wanna take that data and create a set of cubes with it, OLAP cubes. So I'm just gonna double click on this here. Again, this is something that's out of the box here. I've got AP, AR, Finance, Inventory, Purchasing, Sales. We also have a manufacturing cube that comes prepackaged with the system. I don't have it loaded in my demo system here yet, but the system is gonna come prepackaged with the global dimensions that you see here. And then these cubes are gonna come with Measures and they're also going to consume those global dimensions down here, okay?
So let's say that over here in my Customer dimension, it's gonna pull this data from the data warehouse. But let's say I wanted to add that Currency Code as a way to slice and dice my data. This is all it's going to take: I'm gonna add that as a level. Now, once I deploy this out, any of these cubes that are consuming that Customer dimension (you can see the Customer Account here and the Invoice Account, those are both Customer dimensions), any of those are gonna automatically include that Currency Code, and you're gonna be able to slice and dice your data by it. It's just that easy. This is where the efficiency really shows itself in the JDM, when you're working with these OLAP cubes, because programming and deploying cubes can be very labor-intensive, and it can take a lot of scripting, a whole different skill set, to make that happen.
So up here in the Measures, Measures are the numbers that we’re working with. Dimensions are the way we look at those numbers. So I can add some of your basic Measures, you know, Standard, Derived, Calculated. Those come directly from those fact tables. All right? I can also create some more complex type of measures, right? And in order to do that, I’m gonna use something called a snippet. And a snippet is really nothing more than kind of like a template or almost like a wizard that allows you to create some of these more complex measures. Inventory is an example, maybe Inventory Turns or COGS. That’s a particular calculation that you can use a snippet to go out and configure. Time-sensitive measures can be particularly complex sometimes.
So what I'm doing here is I'm able to use the snippet to make it a lot easier. Like, let's say, a Year-to-Date calculation. All right? I'm just gonna feed it where my measure is coming from and a couple of parameters. And then, as you can see, the system is gonna script that out and deploy that measure for me. Okay. Everything that I'm showing you here, the JDM tool is scripting all of it out and deploying it to the Microsoft stack in the background. That's important to note for a variety of reasons, but mainly for compatibility.
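To make the snippet concrete: a year-to-date measure sums a value from the start of the year through the reporting date, resetting each January. The real snippet generates MDX for the cube; this is just a plain-Python illustration of the calculation with invented data:

```python
from datetime import date

# Invented postings: (posting date, sales amount).
postings = [
    (date(2020, 1, 15), 100.0),
    (date(2020, 2, 10), 250.0),
    (date(2020, 3,  5),  50.0),
    (date(2021, 1, 20),  75.0),  # new year: the running total resets
]

def ytd(postings, as_of):
    """Sum amounts posted in as_of's year, up to and including as_of."""
    return sum(amt for d, amt in postings
               if d.year == as_of.year and d <= as_of)

print(ytd(postings, date(2020, 2, 28)))   # January + February postings
print(ytd(postings, date(2021, 12, 31)))  # only the 2021 posting
```

Time-sensitive measures like this get complicated quickly in cube scripting, which is why having a wizard generate them is a real time-saver.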
So the output of Jet Analytics is a straight SQL database and a set of Analysis Services cubes. We're not using a proprietary technology to create this data warehouse database and these cubes, and we're not requiring you to use our tools to consume that data. So you don't have to use our front-end data-visualization tools. You can use things like Power BI or SQL Server Reporting Services. Basically, anything that can consume those Microsoft data technologies can connect to our data warehouse and cubes and consume that data. Okay. So that's also important to note.
A couple of other things about the JDM here. When I make a change to a table, what I'm gonna do is a Deploy and an Execute. So here, I'm just gonna do it at the staging level and tell it to only do what's changed. Basically, what Deploy does is make the change to the database structure. So the additional data fields that I pulled in, those are created by the Deploy step. The Execute step is what deals with the data, right? So if you just deploy an added field, like that Currency field for instance, without executing, it's gonna be empty. The field will be there, but there won't be any data in it. When I execute, that's what actually pulls the data over. All right.
One of the things I like to kind of show here is… I was talking about the system scripting these changes out. These are all of the steps, by the way, that the system went through to deploy those things. We even go so far as to…we show you the exact script that’s being deployed and we even give you the ability to override it if you want. So maybe your DBA, they take a look at the script and they think, “Well, maybe this could be a little more efficient.” Or maybe they have a certain standard that they follow that they wanna make a little tweak here and there to the script. We give them the ability to do that. But again, it’s all being deployed out for you to that Microsoft stack in the background.
Just as I executed the data transformation for those additional fields, we also need an Execute step to refresh all of the other data. Okay. So this is not a live connection to the database; this is a snapshot of your F&O data. So subsequently, we need to be able to refresh the data in the data warehouse so that it's current, and that's what we're using this Execution tab for. We can go in here and add a Schedule to the execution. So I can schedule this to run at a particular time of day, or maybe every two hours, every 15 minutes, whatever we need. I'm gonna go ahead and edit this Execution Package here.
So in refreshing this data, you don't have to do everything at the same time. You don't have to do everything nightly or whatever; you can decide at which interval you wanna do these separate pieces. So for instance, with my Sales cube here, maybe I want that refreshed every two hours, whereas maybe Accounts Receivable can just go nightly; that doesn't need to be updated as frequently. We can separate those different pieces and execute them separately.
Okay. The system comes with a full set of security. And again, it's security that you design in the tool here, and then it gets deployed off to SQL Server in the background. So we're still using SQL Server's very robust security, but the tool gives you a really nice GUI that lets you design it. It makes it easy to design and kind of visualize in front of you. So that can be really nice.
Let's see, a couple of other things about the JDM that I like to bring up. The system is self-documenting. So I'm just gonna choose Documentation here at the project level, and this is gonna create a linkable PDF document. This is usually great news for an IT group. Documentation is usually the last thing that gets done, because nobody likes to do it, unless that's your job of course.
But what the system is gonna do here is go through every data point, every object in the project, and document it in a linkable PDF document. So I'll be able to find those fields that I added to my data warehouse and basically trace them all the way back through the ETL process. It makes it really nice for change management or IT audits in particular. So this is the output here. And I did it at the entire project level, so it's 738 pages; you can tell it's pretty involved. But you can also do it at separate points in your project. You could do just your staging database or maybe just your data warehouse if you want to.
But here's that Customer table I was working with. Let's see that Credit Limit field. I got that from CRM, I think, right? So if I click on it, it's gonna go through and tell me where I got that from. And then I can continue to drill back and see the exact origin of the field, exactly where it's coming from and how it's being mapped. Okay.
So let’s see. I think that about does it for my high-level overview of the JDM. Peter, is there anything you can think of that I missed or that you’d like me to include?
Peter: No, I think that was a pretty clear shot. I’m just wondering out loud if somebody would like to see the results of this, maybe at a high-level Power BI or a high-level report just so they can kind of see how easy this makes everything to use.
Matthew: That is a good idea, actually. Let me bring up my Power BI. Usually, at least when I use Power BI, I'm attaching to a cube, just because that's probably the easiest way to get my data out and onto some type of data visualization inside Power BI. So I'm just gonna say Get Data here. And then what I'll do is choose an Analysis Services database; one of the outputs of my analytics project was a cube. Okay.
So let’s go into my cube here. I’ll just…I have a Sales cube. That’s usually a good one. I’m connected to some demo data here. I know the Sales cube has some good data in it. So let’s start out, maybe with…take a look at Sales. These are all of those measures and dimensions that you were seeing in the cube that I showed you. So let’s just grab Sales Amount, maybe, by Item and then maybe we want like the top 10 to show up, okay?
So here's my top 10 items in the system. Who's selling those? Maybe we wanna know who's selling our items, so I'll add a little pie chart with salespeople on it. Okay. Let's see. Maybe I also want to know where those sales are happening, right? So let's throw a little map on here. Okay. Again, we'll do Sales Amount and then let's take a look at it by city. And this gives us a little heat map, which is kinda nice. It's gonna give us the actual location of where those sales are happening, okay?
So again, on the right-hand side here, what you're seeing is all the same output that any other tool, like the Jet Reports tool, the Excel add-in, is going to connect to; they connect to the cubes as well. It's the same thing. So you're gonna have governed data: anybody using Power BI against these cubes is gonna see the exact same data as someone using, say, Excel to connect to it. It's the exact same thing. Okay. Maybe I wanna do a KPI with profit percentage and profit. Pretty easy stuff. Basically, it's all hooked together. So maybe I want to see April Sales here. If I click on it, it's gonna adjust all of these. They're all linked together and they're all consuming that cube data, kind of, all at the same time. So…
Peter: Okay. It’s very good. We’ve got a question from the audience. They would like to know, “Can you secure the data or how do you secure the data especially if it’s for multiple data sources?”
Matthew: Very, very good question. So let me come back over here to my JDM, and I'll show it to you at the data warehouse level. You can secure the data in different areas, but I'll just use the data warehouse as an example. Okay. So the first thing I'm gonna do is create a Database Role. Maybe I'll just call it Sales. Okay. Then I'm gonna add logins. This is gonna go and pull in logins from my SQL Server instance, so I can grab either a single user, or I can add directory groups or users, whatever I want. Okay.
So now that Sales role has been created, now I wanna do the Security setup, okay? So you can see the Security role, Sales is here. I can do it at the table level, if I just select Tables on the left here. Maybe I wanna give them access to the Sales Unit table. I’m just gonna click on that. It’s got a green arrow. That means that they have access to that.
Now, I can also go in and be more granular with this. Let me expand this a little bit so we can see it. So let's go into my Sales Posted Transactions table; maybe I want to give them access to just certain fields in that table. I can do that here. Okay. Once I okay this and deploy it, it's gonna create that database role in SQL and then deploy those permissions out there automatically for me.
We can do other things here, such as row-level security. It's a bit more of a complex topic, but you can set it up so that maybe one particular financial dimension is only accessible to one group, or maybe you have a group of sales managers and you only want them to see the transactions in their territory, something like that. That's called row-level security. We can set that up here and deploy it out, and then it stays at the data warehouse level. So no matter what you're consuming that data with, whether it's Power BI or Jet Reports or SSRS, that security is gonna follow you wherever you go.
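The filtering idea behind row-level security can be sketched with a per-territory view: a sales manager queries the view, not the base table, and so only ever sees their own rows. In the real deployment this would be SQL Server roles and security policies; the sqlite3 toy below (with invented table names) just illustrates the principle:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE SalesTrans (Territory TEXT, Amount REAL);
INSERT INTO SalesTrans VALUES ('WEST', 100.0), ('EAST', 200.0), ('WEST', 50.0);

-- What the WEST sales manager is allowed to query:
CREATE VIEW SalesTrans_West AS
    SELECT * FROM SalesTrans WHERE Territory = 'WEST';
""")

# The manager's query sees only WEST rows; EAST never appears.
west_total = con.execute("SELECT SUM(Amount) FROM SalesTrans_West").fetchone()[0]
print(west_total)
```

Because the filter lives in the database layer rather than in any one report, every front end (Power BI, Jet Reports, SSRS) inherits it automatically, which is the point Matthew makes about the security following you wherever you go.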
Peter: Okay. I think from an overview and introduction standpoint, we've hit on, not all the high points, but a good number of them. We're certainly available for further questions if you go back through your host. I'll ask right now: Melissa, are there any other questions from the audience?
Melissa: Hi, Peter. Checking out the question pane right now, and I don’t have anything else on there. But just want to let everyone know the question pane is open, if you have anything you’d like to ask them.
Peter: Okay, good. Well, Melissa is your host, and the company, of course, is Encore. If there's anybody who would like to follow up, or any questions that you have, please channel those through Melissa and team. We'll be happy to respond and answer however we can. Otherwise, thank you, everybody; enjoyed you as an audience. Nice and quiet.
Matthew: Thanks, everybody.
Melissa: Great. Thanks, Peter, and thanks, Matthew.
Matthew: Hope you have a great day.
Peter: Bye bye.
Matthew: Bye bye.