While companies chase massive datasets in the hope of making more confident decisions, Jared Armstrong, Managing Director of wHole Solutions, argues that true insight and operational efficiency begin with getting the fundamentals right.
In the latest episode of acQuire Connected, host Jaimee Nobbs sits down with Jared to unpack one of the most misunderstood concepts in modern mining: the role of big data versus master data in driving decision-making. The conversation explores why a strong master data foundation matters more than ever, especially as AI and automation reshape the geoscience landscape.
Listen to the full episode here or read on to find out the key takeaways.
While “big data” has become a buzzword across the mining industry, Jared Armstrong argues that the real game changer isn’t collecting more data; it’s managing the right data.
He explains that mining companies are investing more heavily in instruments and sensors that generate enormous volumes of information, known as big data. However, many overlook the importance of the fundamental data that underpins every resource decision: drill hole coordinates, geological logs, assays, and sample results. These are examples of master data, the verified, traceable facts that define what’s actually in the ground.
“You can’t just expect an AI system to tell you what’s in the ground without any drill holes,” he explains. “Someone’s got to do the hard work and spend the money to collect and validate the data properly.”
Without clear ownership, validation, and structure, the excitement around big data and AI quickly fades into confusion. Jared stresses that robust master data management isn’t about resisting innovation, though; it’s about enabling it. A solid foundation ensures every downstream process, from modeling to AI analysis, operates on a consistent and trusted truth. In other words, big data creates volume; master data creates clarity, and clarity is what drives more confident decisions.
The conversation also explores the very real consequences of weak data governance. Jared recalls sitting in a boardroom with a company that had completely lost its coal quality database due to corruption and a lack of backups. “I could not believe what I was hearing across the table,” he says. “These data sets come together at the cost of millions, sometimes tens or hundreds of millions of dollars, and just like that, they were gone.”
The lesson is clear: data isn’t just a technical asset; it’s a financial one. Without effective governance “guardrails,” years of investment can disappear overnight.
Jared points out that the early signs of poor governance often start small: repeated complaints from internal “customers”, such as downstream engineers or modelers who can’t find or trust the latest data; constant reconciliation of conflicting datasets; or simply a database that’s bloated and slowing down.
“Every time a team spends hours reconciling versions of the same dataset,” Jared says, “that’s a governance problem pretending to be a technical one.” True governance, he explains, ensures continuity, traceability, and trust in your data.
So what does good look like? A strong master data strategy, Jared says, starts with structure and discipline, but it must also be flexible and scalable.
“You don’t want this big overbearing master data management strategy that’s not fit for purpose,” he explains. “You also want one that can grow as you need to grow. And there is quite a difference between a master data management strategy for a junior – it probably doesn’t exist – versus a major mining company.”
Ultimately, a strong master data strategy ensures that information remains trustworthy and auditable. As Jared concludes, “The question has to come, how do you know that you can trust your data? How do you know that what’s being served up to you has not been tampered with, calculated, or just made up? If you can’t define your data, you can’t trust it. And if you can’t trust it, you can’t make decisions with confidence.”
Reliable data management, he adds, requires both structure and stewardship. “It’s having a system in place to manage your data, but you’ve got to maintain that and make sure that there are good boundaries around it.”
Jared also emphasises that management and governance are human responsibilities: “It really is about the people. It really is about giving people the ability and the skills to use the system properly, but to use it efficiently.”
Teams should establish a regular rhythm for reviewing and updating procedures. “You say, okay, we’re in the month of January… we just need to spend a couple of days, we’ll review the document, see if anything’s changed,” he says. “Businesses do change their priority, their emphasis, and the data management needs to reflect that.”
Hear more from Jared Armstrong here:
Jaimee Nobbs (00:00): Welcome to acQuire Connected, the podcast that is your compass in the world of data across environmental, social, and governance.
Welcome back to the fourth season of the acQuire Connected podcast, where we chat to thought leaders and tech experts who are tackling their data management challenges. I’m Jaimee Nobbs, your host for this podcast, and today I’m sitting down with Jared Armstrong, Managing Director at wHole Solutions. We’re talking about something today that might sound a little less flashy than the buzzword going around the mining industry for the past few years, which is “big data”, but it turns out that it’s absolutely critical – we’re talking about getting master data right and the role it plays. Jared recently wrote a piece called Big Bertha and Beyond: Rethinking Data Strategies for Mining and Geoscience, and in it he argues that the real value for geoscientists doesn’t come from big data, it comes from getting master data right.
In this chat today, we talk about what master data means in a geoscientific context, why it’s more critical than ever and how a strong data management foundation can support better decision-making, more reliable AI outputs and smarter workflows in mining and exploration. We cover a lot today, so let’s get into it. Jared, thank you for joining us today. I wanted to chat with you about an article you recently published, Big Bertha and Beyond: Rethinking Data Strategies for Mining and Geoscience. You shared some really interesting thoughts in that, which piqued my interest and I’m pretty sure will pique a few people’s interest, about big data being a buzzword that’s been around in the resources industry for a little while and why you see the real game changer for geoscientists as having robust master data management strategies in place rather than focusing on that big data aspect. I’m excited to dive into this topic with you, but first can you tell me a little bit about yourself, the work you’re doing and how you’ve got to this point in your career?
Jared Armstrong (02:05): Sure, thanks Jaimee. Nice to meet you. Thanks for having me on. I have actually been in the mining industry for a long time. I first started when I was 17, and the only time I’ve been away from the mining industry was when I went to uni for a few years. So I worked in and around coal mines on site doing a whole variety of things, from geotechnical to exploration to strata control to underground mapping to resource geology, resource management, all sorts of things. However, I was always sort of locked into the process of capturing data, chasing drill rigs round, and then when the drill rigs go away on break, you’re putting the data in, and then you’re back again chasing the rigs. And I had this aha moment when I saw a database company show they could manage not just the data but the workflows. They were able to semi-automate some of the processes and I thought, you beauty, if I could get a system like this, I could get back to doing what I’m paid to do and what I was trained to do and hopefully add value to the business, using my geological knowledge to contribute to short-term planning, to the resource side of things, et cetera.
And since then I decided that I wanted a career change, and I left direct mining and joined the database world. So I’ve been pretty well doing that ever since. That’s sort of how I got into this game. I’ve been a bit of, I guess, a consultant or contractor, depending on who you talk to and what you do, but essentially, to be really blunt, you’re a dogsbody: you do whatever you’re told to do. “We need this cleaned up”, or “can you fix this – it’s not working”, or “look, we want to do an audit, can you give us an audit? This is what we’re trying to do.” I say, yeah, absolutely. Or “we want to build a new part of the system”. So you come in and you apply your skills and hopefully leave the place better than when you started.
Jaimee Nobbs (03:56): It sounds like you’ve had a really interesting career. You were telling me just before that you actually used to work at acQuire and so now you’re part of the Nova Network with your own company that you are running. How has that experience been?
Jared Armstrong (04:10): Daunting, launching out on your own, having a go, but it’s been quite rewarding. It’s very challenging. Often you take a job on knowing that you can understand three quarters of it and that you’ve got to back yourself to work out the rest on the go. So fortunately I think it’s worked quite well so far. I’ve learned a lot about how things are done in the real world, so to speak, with database geologists and mining companies on all different sorts of commodities. So I think it’s worked quite well.
Jaimee Nobbs (04:43): Great. I think it’s really interesting with the way that the Nova Network is structured that you can go out to different mining companies, you get to see different problems and different challenges at different sites that may actually be the same thing that other people are facing. So at an industry level, you’d get to see a lot, which is quite interesting. When we talk about big data, perhaps we can tie it back to your article, but what made you write that article and why did you see the term big data floating around so much? What were people trying to achieve?
Jared Armstrong (05:17): Well, I guess I was seeing the utilisation of big data in mining companies: employing people at great expense to capture data from instrumentation and from core sensing devices, et cetera. And although what the technology can build is exciting, I began to think, man, there’s a lot of money being spent here, and I just wondered, would you get as good a bang for your buck if you spent the same money on just looking after the basic stuff? The basic stuff is the master data: bringing the attention to those fundamental facts, if you like, that are required to power the decision making within the geoscience departments and the corporate world in terms of mining. Because look, at the end of the day, let’s face it, the value of a mining company is mostly defined by its in-ground resource, and how do you know what’s in that ground?
You only know what’s in the ground by what you see. You can only see through what you capture in terms of data from drill holes and samples, and results from logging and from laboratories. So those are the bare bones facts, if you like. We call those master data. But I just thought that although I know money needs to be spent, certainly in that area of research for big data and certainly artificial intelligence, I wondered at the value return, I guess, and we were seeing a neglect of the master data in organisations. Myself and a colleague of mine were finding that we were having to re-emphasise the value of focusing on the master data rather than focusing on the next shiny toy that comes across the information management industry, of which there are many. And of course, I’ve been guilty as charged of dabbling in them to try and understand, but I was motivated to write the article to try and bring a focus back, I guess, if that makes sense.
Jaimee Nobbs (07:16): Yeah, and we will link the article in the show notes. The way that you position master data is getting the foundations of your data management right. It makes sense, but why do you think companies were seeing, or are seeing, big data as a silver bullet for things like better decision making and operational efficiencies? Why do you think that is?
Jared Armstrong (07:46): Look, I think that to a large degree there’s a temptation to try something new to make it easier. Geoscience for a lot of mining folk is an area they don’t like in the industry. It’s too difficult to quantify from an engineering perspective. It’s vague. It’s not sort of black and white, where you can throw formulas and numbers at it to get a result. So if they could do away with the geoscience department and just have a numerical equivalent that computed what they need to do, they’d probably be quite happy with that. So potentially data science, and therefore AI, may be in some way a path to that: fewer employees and better decisions, I suppose. But at the end of the day, these engines rely very heavily on information and data, and the information is usually pulled together and prepared, I suppose, by human intelligence, if you like.
So you can’t just expect an AI system to tell you what’s in the ground without any drill holes, so to speak. Someone’s got to do the hard work and spend the money to go and collect the data and then put guardrails in place to make sure the data is collected correctly, efficiently, and has been validated, et cetera. And then of course stored in a place where it can be trusted and you can go back and use it time and time and time and time again. So I don’t think you’re ever going to get away from having that need from a mining perspective.
Jaimee Nobbs (09:15): So this is a nice tie back to master data being where I guess the true power for a mining company lies in terms of better decision making, those operational efficiencies, if you’ve got really well structured, well governed master data in place, then that’s where you can actually see good results regardless of what technologies come into place. Am I understanding that correctly? So I mean you’ve kind of touched on it a little bit, but how do you specifically define master data in the context of mining geology? What actually qualifies as master data? How do you tell the difference between just any data coming into the company versus what is truly master data?
Jared Armstrong (10:00): Well, I’ve got about three answers to that, so let’s just try. Firstly, there’s master data and then there’s master data management, and you might think that master data management is just the management of master data. Well, it is, but it’s a whole lot more than that. I’ll get to that. So master data firstly, as opposed to say big data, as opposed to reference data, as opposed to metadata. Master data is different in that it has a different workflow. Each of these buckets of data types, you need to manage differently. So master data refers to the main data that you use for your business. In the mining sense, that would be hole IDs and coordinates. It would be the geological logging intervals and what’s observed in the hole, the geology, et cetera. It’s also the samples, the sample IDs, the dates they are sent, the results that come back from the lab.
These are coming into the database on a regular basis. It could be on a daily or weekly basis. So these bits of information are important for many reasons. However, a colleague of mine, Tony Shellshear and I got together and talked about this master data and that’s what that article was about. And he had come up with the concept of technical master data sets. So he’s taken the term MDM, master data management, and called it technical master data sets. And the idea here is we see that the master data takes its form as two master data sets. One is the original or the source master data set. The second is a working data set. Both are master data sets, but they’re both quite different. So let’s just talk about the first one. The first one is a source master data set.
It really contains the data that is collected as is. So I mean in its rawest form, it’s been validated. In other words, if it’s got mistakes, it’s corrected, but you’ve tried to keep it as raw as possible and you’ve tried to keep it as close as possible to what was collected either in field or from lab. The idea is that if you need to go back and validate from an auditing perspective further down the track, that that link is there and you can see aha, that piece of information, yes, I can verify it was collected on that date by that person, and here’s what actually was collected. It’s all there to be seen. So even if you have different units for the same analyte in time, you would store it as collected rather than trying to change the units. So that’s the source data set.
The working data set is a data set that has been unified, if you like, and it may in some cases have interpolations involved in it. For example, downhole surveys, if they’re missing, you may be able to do some basic interpolation, but essentially it represents a data set that the business can use. It’s what all the downstream users come back to when they need it for their own purposes, whether it be for marketing, whether it be for resource development, whether it be for environmental. They can all come back to that one signed-off data set that they know is true, that they know they can rely on, and that they can use to develop or build whatever they have to do, the short-term plan, et cetera. Does that make sense?
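The source/working split Jared describes can be sketched in a few lines of code. This is a minimal illustration only: the field names (hole_id, analyte, value, unit) and the units involved are assumptions for the example, not a specific product's schema.

```python
# Two technical master data sets: a source set kept exactly as collected
# (units as reported, never edited), and a working set derived from it
# with units unified for downstream users.

# Source master data set: as-collected, immutable after validation.
SOURCE = [
    {"hole_id": "DH001", "analyte": "ash", "value": 12.4, "unit": "%"},
    {"hole_id": "DH002", "analyte": "ash", "value": 0.131, "unit": "fraction"},
]

def build_working(source):
    """Derive the unified working set; the source records are left untouched."""
    working = []
    for rec in source:
        # Normalise everything to percent for the signed-off working set.
        value = rec["value"] * 100 if rec["unit"] == "fraction" else rec["value"]
        working.append({"hole_id": rec["hole_id"], "analyte": rec["analyte"],
                        "value": round(value, 6), "unit": "%"})
    return working

working = build_working(SOURCE)
```

The key design point, in line with Jared's description, is that the conversion happens only in the derived set: an auditor can always go back to `SOURCE` and see exactly what the lab reported.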
Jaimee Nobbs (13:27): Yeah.
Jared Armstrong (13:28): Yeah. So that’s the third element, if you like, to master data in terms of mining. The fourth thing I would say, if I could just put a plug in for the AIG symposium: the Australian Institute of Geoscientists are actually running a one-day symposium on that topic of data management, but specifically master data management, and that’s on the 31st of October here in Brisbane. That’s being run by the AIG, and myself and Tony are organising it. We’re collecting speakers to try and embellish or tease out that concept of master data management and why we think it’s still of great importance to us geos, certainly in that mining space in Queensland, Australia, and I guess wherever else you are required to deal with data from a mining perspective.
Jaimee Nobbs (14:13): That sounds really interesting, it’s very topical. I can also pop a link in the show notes. There is a website for that? Yep. Perfect. So then, if we’re not specifically looking at master data itself, we’ll be looking at the master data management. Why do you think a strong master data strategy is so critical for people to make decisions in the mining industry? What happens if you don’t have a strong strategy in place?
Jared Armstrong (14:41): Well, I guess the decisions are made based on the data, and the data has to be in place. It has to be reliable. The master data management really provides the guardrails, if you like, so that data continues to be streamed in using the same principles, the same processes. The master data management strategy dictates where the procedures are stored and that they’re updated regularly. It dictates certain key performance indicators for the data collectors, and maybe the database geologists, to make sure that the process continues, but also the system continues. It’s having a system in place to manage your data, but you’ve got to maintain that and you’ve got to make sure that there are good boundaries around that. Things like certain people having certain levels of access to the system, and the database being backed up regularly. That’s a fairly simple one, but you’d be surprised.
I remember sitting in a boardroom once with an organisation that was keen to get a system in place, and their motivating factor was the fact that they had completely lost their coal quality database. It had been corrupted and they hadn’t had a backup, and I could not believe what I was hearing across the table, that something like that could happen. Now, these data sets come together at the cost of millions, sometimes tens, possibly hundreds of millions of dollars to collect, and this data set was just gone. They were able to cobble components of that data set back together, but I can’t believe that in an instant, without those guardrails, all that information and the money that was spent was just gone like that. So I think the master data management strategy is important. It sits over the system. It really is about the people. It really is about giving people the ability and the skills to use the system properly, but to use it efficiently. Yeah. Does that explain the management?
Jaimee Nobbs (16:37): It does, yeah. It makes it pretty clear. Like you said, guardrails is a really nice way to describe it. So it’s not dictating exactly how someone, well, it does to a degree help support someone putting in data, making sure it’s validated and complete, but it’s also about protecting the data, making sure that you’ve got backups so that if something happens, you do have a fallback, almost. So you’ve given a really good example there of a company that’s lost a data set. How can teams recognise when their master data management strategy is weak? What would be some early red flags for a company that’s looking at their own data management practices now, for example?
Jared Armstrong (17:24): I would say, if you’re getting an increase in complaints from your customers. Now, we often think about customers as someone outside a company: we deliver a product to someone who pays money for it. But in the mining world, certainly in geoscience, and if you’re involved in the database, your customers are not someone outside the company. Your customers are downstream from you, but in the same company. And if you are getting lots of complaints: why is this data not here? It should have been there last night, or why was last week’s data still in the data set? Or, I can’t get my data out. These are sort of typical questions. You might want to blame the database; it always gets the blame. But if you look deeply at the problem, often there’s been a failure in the system, the master data management system, or the strategy as such, the process behind it.
So when you start to get an increase, I would suggest that’s one cue that you probably need to be addressing your governance and management of the system as opposed to the actual system itself. If you’re finding you’re getting bloat, the database is growing and it’s just slower and slower and slower, that would be another cue, if you like, to suggest that maybe it needs some attention, more from a technical and a system perspective, but potentially some of the original systems and processes that were set up are not being reviewed or enacted as they should be. The question has to come: how do you know that you can trust your data? How do you know you can trust those assays that you’ve got and that you’re just putting inside a model to generate a pretty picture or a wireframe or something like that? How do you know that what’s being served up to you has not been tampered with, calculated or just made up?
And the next question is, what are the ramifications of getting it wrong? I mean, you think about a resource statement that’s based on the information that’s collected. If you get that wrong, it’s dreadful, it’s diabolical. And this brings to the fore the whole issue of data integrity. You need to be able to sign off on the data integrity, I believe, as much as you need to be able to sign off on a qualified resource statement. So I think in the future we’re going to see a lot more focus on how you know that what you have is correct and true. And I think there’s going to be more pressure, certainly in that database space, for people to actually show cause that what they have is real, is true, is original. So those are just two that come to the top of my head as cues to look out for.
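One common way to answer "how do you know it hasn't been tampered with?" is to fingerprint a data set at sign-off and re-verify the fingerprint before it feeds any downstream model. This is a generic sketch of that idea, not a description of any particular system Jared uses; the record fields are illustrative assumptions.

```python
# Tamper evidence for a signed-off data set: take a cryptographic digest
# at sign-off, then recompute and compare before the data is consumed.
import hashlib
import json

def fingerprint(records):
    """Deterministic SHA-256 digest of a list of dict records."""
    canonical = json.dumps(records, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

signed_off = [{"hole_id": "DH001", "from": 10.0, "to": 11.0, "ash_pct": 12.4}]
digest_at_signoff = fingerprint(signed_off)

# Later, before modelling: any edit, however small, changes the digest.
tampered = [{"hole_id": "DH001", "from": 10.0, "to": 11.0, "ash_pct": 12.9}]
print(fingerprint(signed_off) == digest_at_signoff)  # True
print(fingerprint(tampered) == digest_at_signoff)    # False
```

The digest does not prevent loss or corruption on its own (backups still do that job), but it makes the question "is this the data set we signed off?" mechanically answerable, which is the auditability Jared is arguing for.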
Jaimee Nobbs (19:59): They’re very practical. It’s something that people can look out for and understand very quickly whether they’re in that position or whether there are things they can do to improve their data management strategies. So we’ve talked about when it’s in a bad place, but if we’re talking about a strong master data management strategy, what are the core pillars that make this up? If we look at it, for example, for a team starting from scratch, how would a company set up the core pillars of a strong master data management strategy?
Jared Armstrong (20:36): That’s a good question. I’ve had to think about that quite a bit. I don’t know whether I’ve got a full set of answers there, but I would say, firstly, it has to be able to scale as the business grows or shrinks, and I’ve seen both: businesses grow and they shrink. You don’t want this big overbearing master data management strategy that’s not fit for purpose. You also want one that can grow as you need to grow. And there is quite a difference between, say, a master data management strategy for a junior (it probably doesn’t exist) versus a major mining company. That’s one of the first things. The second thing is it needs to be reviewed regularly. I mean, you work out that cadence, if you like, possibly 12 months or so, but go through and commit to making changes and upgrades to all your processes rather than leaving them stranded and out of date.
If you are subjected to an audit and you have a good master data management plan in place, ideally you will pass with flying colours. So with that in mind, as an owner of a database, or as a senior database geologist responsible for the data management, you can actually employ audits to bring to the fore issues and problems that need to be addressed. Maybe it can be documented and used as part of a case to get more resources from management, per se, to get things fixed up or sorted out, or maybe an extra person on the job that you need to help maintain the system. But at the moment, that’s all I can come up with. I know there’s a lot more there, but certainly that’s a start.
Jaimee Nobbs (22:15): I really liked you talking about the cadence, but one thing that I’m interested to know is it’s one thing to prioritise making those upgrades and those changes, but how important is it that people find time to do that during their routine operational day-to-day work rather than when they’ve already lost a dataset?
Jared Armstrong (22:38): Yes. Well look, there’s a time and place if you’ve lost a data set, that’s a different problem.
And you just slot it in. You say, okay, we’re in the month of January, end of the month or middle of the month, we just need to spend a couple of days. We’ll review the document, see if anything’s changed. And often it has. Businesses do change their priority, their emphasis, and the data management needs to reflect that and serve the needs of the business as those changes take place. So it’s just a case of slotting in, in my opinion, a time slot, getting your team together, working it through. And you need to be able to listen to those on the ground that are doing the work too, and make sure that what they’re seeing and doing, if it’s changed, and what’s more important to them this year as opposed to last year, is captured within the system.
Jaimee Nobbs (23:28): So the business needs dictate how often that cadence really does need to take place; if the business is changing rapidly, you might have to do it a little bit more rapidly, if that makes sense. Possibly. So we’ve gone right into master data management strategies, and what a weak one looks like versus a really strong one, and the differences. But I still would like to go a little bit into the big data space. Where does AI fit into this? Can we trust its outputs without first getting our master data right? If we feed a machine enough information, is it able to pull out results with confidence if we don’t quite have our master data in place?
Jared Armstrong (24:18): What you raise is a good point, and you will have heard the adage garbage in, garbage out. I remember hearing that when I was, I think, going to uni, but it still holds true. We’ve also heard of the term hallucinations, which relates to AI. It’s when AI starts to produce its own interpretations, its own information, and it’s completely divorced from reality. So you don’t want that. You want to minimise the ability of the system to hallucinate, and the data needs to be reliable, it needs to be validated, and this is why what we’re talking about, master data management, is important. But AI is not just about regurgitating information. I think there are places where we can use AI for anomaly detection. If you can hook AI up to, say, the database systems, you can cue and train the system to know what to look for in terms of data that might be out of whack or not quite right, or lists that are no longer relevant. I think there’s great value in being able to use AI for those sorts of ends, and in using AI to write scripts for you, very specifically. If it has access to a database, potentially it can do that and pull reports out for you. But as you mentioned, underlying this, the data has to be validated, corrected, trustworthy, et cetera.
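As a simple stand-in for the anomaly detection Jared describes, even a basic robust statistic can flag a value that is "out of whack" before it reaches the working data set. A real system might use a trained model; this sketch uses a modified z-score (median and MAD), and the assay values are made-up illustrative numbers.

```python
# Flag suspicious values using a modified z-score: median and MAD are
# robust to the very outliers we are trying to catch, unlike mean/stdev.
from statistics import median

def flag_outliers(values, threshold=3.5):
    """Return values whose modified z-score exceeds the threshold."""
    med = median(values)
    mad = median(abs(v - med) for v in values)  # median absolute deviation
    if mad == 0:
        return []  # all values (near) identical; nothing to flag
    return [v for v in values if 0.6745 * abs(v - med) / mad > threshold]

# An ash assay of 87.5% in a run of ~12% values is almost certainly a
# transcription or unit error worth a human look before sign-off.
assays = [12.1, 12.4, 11.9, 12.3, 12.0, 87.5]
print(flag_outliers(assays))  # [87.5]
```

The point of flagging rather than deleting matches the workflow discussed earlier: the suspect value stays in the source data set as collected, and a person decides what happens in the working set.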
Jaimee Nobbs (25:33): That’s a good perspective actually. Not just using AI as an output to make decisions, but finding ways you can use it to improve your data management in the first place. That’s right. It’s flipping the question perfectly. So yeah, thank you for that. That’s a really interesting perspective I hadn’t really considered. I do have one last question for you just to wrap up. We’ve talked about big data and future technologies coming into place and all the exciting opportunities for these pieces of technology, and you’ve mentioned, though, that having people do the hard work, the human side of it, still very much puts the geologist at the heart of this. As data becomes more central and more important, and we’ve got bigger data sets coming in and more sensors logging information, how must a geologist’s role evolve to keep up with all of these changes?
Jared Armstrong (26:40): Big questions. The future.
You may have heard of a guy called Andrew Ng. He founded DeepLearning.AI and he was also involved in starting what I think they call Google Brain. He’s also a teacher on some of the massive open online learning platforms for machine learning. And he made a statement: AI is the new electricity. When you think about that, electricity was a game changer when it first came out, certainly for industries. But secondly, we just flick a switch and we move on in life; we don’t think about it. Electricity is all around us and we just accept it as the norm. It’s in the power lines above, it’s on our rooftops, it’s in our batteries, it’s in our pocket, it’s everywhere. And we just expect that that’s part of modern living. You think about that: what does electricity rely on? It relies on copper, right?
All the wiring in your home that allows your phone to be charged, your computers to run, your new battery car to be efficient. And you think about that further: getting that copper in place relies on the electrician. The electrician has to lay the cable, he has to test for shorts, they have to liaise with government authorities to connect to power. They’ve got to put safety systems in place. They’ve got to ensure everything is done to a standard. That’s the role of the electrician. And yet all of that is behind the ability to just flick a switch that you and I think of as normal. Well, think forward about AI being the new electricity: it’s going to be a thing we just don’t think about. It’s kind of done. Decisions are made on our behalf, et cetera.
I think if AI is going to be the new electricity, then data is going to be the new copper. So when you think about data, we’re talking specifically about master data. You’ve got to serve the stuff up. It’s got to be done quicker than it is being done now. It’s got to be more reliable. And you think about what gives AI its power. Firstly, it’s reliability, but also it’s relevance. So if we’re going to have AI systems that are reliable and relevant to our workspace, certainly in mining, we’re going to have to have robust data systems underpinning them. They don’t come to the fore, the data systems actually go to the back, but they’re nonetheless more important, if that makes sense, rather than less important. And I also think the geoscientist of the future is going to be like that electrician.
That geoscientist is going to have to do things within the guardrails, to a standard, and liaise with government officials. You’ve got to push your data back up, reporting-wise, if you’re on exploration leases, once you let them drop, for example. And for the geologist, if you like, in the mining space, that geologist is going to have to be able to look at that screen when the results come back and go, nah, don’t trust that. The geologist has to be able to ask the question and challenge the answer, I should say. And they need to be able to look and know where that data has come from. So I still think the modern geologist is going to have to have their feet firmly planted on the ground in terms of having experience with collecting data in the field, whether it be mapping, whether it be soil samples, whether it be logging in the core shed or out on site next to the rig. You need to understand how that is collected, where it comes from. But the modern geologist also needs to be really keyed into their machine learning and data science peers, asking the question over the fence, if you like, to say, how can I do this better? I think there are big synergies to be had by these two disciplines talking to each other and putting systems together that can make those data systems more efficient. Does that make sense? Is that a metaphor that kind of gels?
Jaimee Nobbs (30:37): For someone that is not a geoscientist or a geologist, it makes perfect sense. It really nicely ties in the role of future technologies, the role of data, and the role of the geologist and the geoscientist, and how they all can work together well. It’s not about something taking over people’s roles; it’s about how they can all work really well together to make decisions faster, more confidently, and mine more efficiently. I think it’s a great analogy and it’s a really great place to leave the podcast, for people to think over that analogy. As I mentioned earlier, I will link to the symposium that you’re running in Queensland at the end of October, and I will also link to the article. So if people want to have a read of the original article, they can. I know we’ve covered a lot of ground, and it was hard to condense data management, big data and the future of mining into a couple of questions, so I am sure there are some gaps and I’m sure I’ll have questions for you at a later stage as well. But thank you so much for joining this episode, Jared. I really appreciate it.
Jared Armstrong (31:52): Look, thank you Jaimee. And I do have plans for a few more LinkedIn articles of the same nature to be pushed out, so we’ll just have to wait and see. But again, one last plug: don’t forget that Symposium on Data Management on the 31st of October here in Brisbane, if you can make it. Thanks, Jaimee.
Jaimee Nobbs (32:11): That’s a wrap on this episode of acQuire Connected. Huge thanks to Jared Armstrong for joining us on this episode. And if it’s got you thinking differently about your data management strategy, be sure to share it with your team. And if you haven’t already, hit subscribe so you don’t miss out on any future episodes. Thanks for listening and we’ll catch you next time on acQuire Connected.
Thanks for listening to the acQuire Connected podcast channel. Find us at acquire.com.au.