Knowledge Sharing in the World of Generative AI - New Horizons
October 20, 2025
The arrival of Generative Artificial Intelligence (Gen AI) and, more recently, Agentic AI has truly profound implications for reliable information and customer knowledge sharing. This makes sense, of course. Exponentially more capable AI is having a sea-change effect on everything it touches - which is everything.
In a future piece, I plan to explore how Gen AI deployment within an organization connects to and impacts the end-to-end process of content generation and sharing with audiences within and outside its walls. I will examine intra-organizational AI governance issues, techniques and real-time solutions in more detail.
Here, I focus more generally on the context of the AI environment and how it intersects with the sharing of expert and proprietary information in community settings. With the aid of Gen AI corroboration, I aim to explain why the need for walled-off or invitation-only community knowledge areas, such as those afforded by the powerful Zapnito platform, has moved from being a great value proposition and antidote to noise and fragmentation to being a critical need and safeguard against the degradation of informational quality - and even content manipulation - in the context of increased AI capability. If the Covid-19 pandemic acted to encourage and propel the development of such communities, the emergence of AI in its current and developing form will, I predict, send that development into overdrive. The market is already awakening to this new reality and to the need to protect private, expert, branded and other niche platforms. Early movers will, in my view, secure a major competitive advantage, in addition to the benefits integral to the communities themselves.
Before we get into the details, let's start with some basic AI definitions and background, as this is where the market can be misleading and sometimes, frankly, disingenuous in its characterization of Gen AI.
Let's begin with what lies outside the circle of focus. I am not reviewing the efficient AI automation of personalized recommendations, follow-ups, suggested clean-ups, mismatch detection, threat monitoring, and other enhancements. Many of these pre-date Gen AI, and all are on the rise as useful tools powered by AI.
Inside our circle, Gen AI refers to what we typically imagine - the generation of content by AI based upon our prompts. Agentic AI takes this functionality, and other automation capabilities as described, and “elevates” them by placing them in the hands of potentially roving agents that, given a set of instructions, have the freedom to act autonomously. The degrees of freedom afforded these agents vary based upon the parameters that have been set. In some cases, the objectives will be pre-defined; in others, they will be open-ended.
Three immediate questions come to mind. First, what are the innate risks presented by Large Language Models (LLMs), which are the principal engine behind Gen AI and Agentic AI? Second, what are the key operational risks associated with deploying these technologies within an organization? Lastly, how do autonomous agents raise the stakes both for deployers, and all organizations as potential targets, if they are misused or behave unpredictably?
To be brief, LLMs remain an enigma. No one knows exactly why they work. They are engines of probability based upon the pattern analysis of language. They are also liable to manipulation, hence discussions around their safety. Anything said otherwise is basically blarney. These susceptibilities are why we cannot rule out errors and confabulations (known as hallucinations), biases, toxicity, deepfakes, impersonations, or anything else, especially given the enormous amount of data that they are trained on. LLMs are a hive mind, of sorts. They have some guardrails, certainly, but there are no guarantees. Each prompt to them is an open-ended quest.
For these reasons, though they are very often accurate and otherwise useful for initial groundwork, LLMs' probabilistic output makes them inherently unreliable for organizations without review. Moreover, their output is consistently authoritative in its expression, despite a lack of true understanding of meaning. That is a dangerous combination. They lack the reserve that comes with human accountability. They have even been known to fabricate deliberately. Straight-up dishonesty. As to why, that too is a mystery to users - and often to the developers themselves. Data poisoning of a model's dataset through malicious tampering may have occurred, but not necessarily so.
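To make the "engine of probability" point concrete, here is a minimal sketch of what a single decoding step looks like. The token strings and probabilities are entirely hypothetical and purely illustrative; the point is that the model assigns a probability to every candidate next token and one is sampled, which is why the same prompt can legitimately yield different - and occasionally wrong - continuations.

```python
import random

# Hypothetical next-token distribution after a prompt such as
# "The capital of Australia is" - the values are illustrative only.
next_token_probs = {
    "Canberra": 0.62,   # correct continuation
    "Sydney": 0.30,     # plausible but wrong
    "Melbourne": 0.07,  # plausible but wrong
    "Paris": 0.01,      # rare, but never impossible
}

def sample_next_token(probs):
    """Sample one token according to its probability mass."""
    tokens = list(probs.keys())
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

if __name__ == "__main__":
    # Run the same "prompt" ten times: each answer reads as equally
    # authoritative, but it is not guaranteed to be consistent - or right.
    for _ in range(10):
        print("The capital of Australia is", sample_next_token(next_token_probs))
```

Every real model is vastly more sophisticated than this toy, of course, but the underlying mechanism is the same: confident-sounding output drawn from a probability distribution, not from verified fact.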
Weaponized Generative AI capability in the public domain and elsewhere at the hands of bad actors is an increasing problem. In an open-source environment, this represents a particularly acute and burgeoning threat to cybersecurity, with personally identifiable and other information a potential target for ransomware and other attacks. When we add Agentic AI autonomy into this equation, the ante is raised even further. Remember, the fundamental uncertainties of the LLMs remain, except that now their risks are amplified and mobile.
Context is, of course, critical to any meaningful analysis. It is essential that we consider the backdrop into which this seismic generative and autonomous technology has arrived. There is a startling lack of preparedness within most organizations for the tectonic shifts that AI is already bringing. Data integrity, legacy technology systems, API security, access controls and systemic monitoring are all generally a poor match for the new risks being posed. This even includes potential security risks surrounding the Model Context Protocol (MCP), introduced by Anthropic, which acts as the gateway interface between AI models and other tools and workflows, including agents. In summary, there are many vulnerabilities, many of them lying in wait and increasingly being probed and revealed. This combination of susceptibilities makes for easy prey for those using AI with nefarious intent. Radical transformations in our thought process are therefore required. This is far from business as usual.
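For readers unfamiliar with MCP, here is a minimal sketch of what such a gateway can look like in practice. It assumes the official Model Context Protocol Python SDK (the `mcp` package) and its FastMCP helper; the server name and the `lookup_customer` tool are hypothetical. The thing to notice is that whatever the tool exposes becomes reachable by any connected model or agent - which is exactly why access controls and monitoring around these gateways matter.

```python
# A hypothetical MCP server exposing an internal tool to AI models/agents.
# Assumes the official Model Context Protocol Python SDK ("mcp" package).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-knowledge-gateway")  # hypothetical server name

@mcp.tool()
def lookup_customer(customer_id: str) -> str:
    """Return a customer record summary (hypothetical internal lookup)."""
    # In a real deployment this would query an internal system - precisely
    # the kind of surface that needs authentication, scoping and logging
    # before an autonomous agent is allowed to call it.
    return f"Record for customer {customer_id}: [redacted in this sketch]"

if __name__ == "__main__":
    # Runs the server so a connected model or agent can call the tool.
    mcp.run()
```

A sketch like this is only a starting point; who may connect, what each tool may reach, and how every call is logged are the governance questions that most deployments have not yet answered.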
The sum of all of this is that effective engagement with this enormously capable technology - which I also use extensively myself - requires constant validation and verification, particularly in the organizational deployment context. Cybersecurity efforts must be bolstered and work together with information management processes to extract AI's benefits while safeguarding against its risks. In this author's view, there is no other credible path. It shouldn't really be news, of course - change is constant and excellence requires effort.
Let's quickly review some other observations and factors at work before we turn our attention back specifically to information sharing within community platforms and how they are impacted by the current AI environment more generally:
- Ever felt that you are being listened to by AI? Of course you have. Increasingly intrusive AI surveillance forms the basis of targeted advertising. Now we even have regressive dynamic pricing tailored to each individual buyer based upon profiling and intelligence gathering. I'm not a fan. Nothing is ever free - including a Faustian bargain. Focus on the surveillance.
- AI note taker tools are in the news and are listening to everything during meetings, including small talk. Where does this information go without a delete button?
- It is estimated that AI reduces the time required to mount social engineering, prompt injection and phishing attacks for ransomware or other purposes by a factor of 200. Yes, you read that right. A recent survey indicated that 16 percent of such attacks are powered by AI. It won't be long before they all are.
- Shadow (unauthorized) Gen AI use within organizations without AI governance policies and workflow validation protocols in place is pretty much the norm. Personally identifiable information is at risk and is being irretrievably lost to the black hole of AI. Where it will reappear is anybody's guess.
- The proliferation of Gen AI in search is skewing traditional SEO approaches and organizational visibility, throwing up new challenges for business development and operations.
- There are claims that an unknown number of AI models are bypassing robots.txt restrictions with a mission to scrape information from even proprietary spaces (a short sketch after this list shows what respecting a robots.txt restriction actually involves).
- “Pay-to-crawl” is emerging as a phenomenon, meaning that open environments are increasingly exposed by design. Humans are seeking retreat to protected communities beyond reach due to an erosion - make that an implosion - of trust.
- Customizable AI models are now beginning to emerge that are unashamedly biased in their output. Effectively, they are efficient propaganda machines with client-led configuration of the sources from which output is being drawn.
- There is an exponential increase in unmoderated Gen AI output - AI “slop” - on social media spaces, information sharing networks and open-access community and business networking platforms. 10% of all online content is estimated to be AI-produced, with projections suggesting this will increase to 99% by 2030. AI-powered bots are responsible for 20–70% of all traffic, depending upon the web platform, often scraping data for training or manipulation. Try not to get bogged down in the actual statistics. Focus on the trends and the ramifications.
- Expert and verified information (whether produced with the aid of Gen AI or not) finds it nigh on impossible to be seen and heard through the din in the public sphere, particularly on LinkedIn and other social media channels.
- Unreliable Gen AI information is being pumped out at such a rate that discussion in various quarters is turning to potential AI model collapse because of the recursive recycling of the same or similar information. Remember: AI never sleeps.
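Regarding the robots.txt point above: robots.txt is an honor system. A well-behaved crawler checks the file and stays out of disallowed paths; a crawler that ignores it faces no technical barrier at all. A minimal sketch using Python's standard-library robotparser (the site URL and bot name are hypothetical) shows the check that is allegedly being skipped:

```python
# Checking robots.txt the way a compliant crawler would, using only the
# Python standard library. The domain and user agent are hypothetical.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example-community.com/robots.txt")
rp.read()  # fetch and parse the site's published crawl rules

# A compliant bot asks before fetching; a non-compliant one simply doesn't.
allowed = rp.can_fetch("ExampleAIBot", "https://example-community.com/members/")
print("Allowed to crawl member pages:", allowed)
```

The takeaway is that robots.txt expresses a request, not a control. Content that genuinely needs protection requires authentication and access controls - which is part of the argument for protected community spaces.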
None of these issues can be seen in isolation. They are live, interlaced threads making up a tapestry that demands fresh thinking in response.
So, back to our objective here. By way of background, last year (which seems like an eon ago in the world of AI) I asked Anthropic's Claude about its view regarding the practical steps to be taken by enterprise deployers of Gen AI to create guardrails against its unregulated use within an organization. I will cover this in much more detail in my future article.
Over the past year, models have further proliferated (there are in excess of two million of them by last count); the regulatory landscape has become increasingly fragmented and inconsistent; the geopolitical dynamics have intensified; search protocols have fundamentally changed; the talk has moved to "agents" with little clarity about their definition (I call it narrative engineering); widespread poor data quality, infrastructural issues and API vulnerabilities remain; personnel teams have yet to be properly re-configured and trained with AI governance policies in effect; many AI pilot programs have been abandoned for lack of results due to many of these factors; and yet, the developer and deployer competitive stakes and expectations continue to evolve at breakneck speed.
The truth is that while the hype machine is in full swing, most deploying organizations are still not much further ahead with Gen AI implementation, much less with agents, due to the general lack of know-how in converting a nifty tech tool into a dependable business process. While this occurs, public avenues for information sharing are vanishing overnight. As the pioneer futurist John Naisbitt put it in his book “Megatrends”, even back in 1982: “We are drowning in information but starved for knowledge.” Well, as the saying goes, you ain't seen nothing yet.
As such, almost exactly one year on from my prompt around internal Gen AI governance, last month I asked a different AI model, this time Microsoft's Copilot, the following question regarding information-sharing communities specifically, along with an accompanying request:
“What is the optimal future for expert communities in a public sphere dominated by unmoderated AI output? List suggestions. Please also comment on the role of closed proprietary networks for peer-to-peer, customer and community knowledge sharing.”
It's fair to say that Copilot gave a full-throated response, warning of the risks of AI without human-in-the-loop involvement and recommending proprietary platforms with adequate protection to maintain informational integrity. As I noted earlier, people and organizations of all persuasions and disciplines are, in increasing numbers, already seeking the refuge of such spaces. That is not to say that this is a silver bullet or panacea. We are still left with the consequences of this changed environment in the public domain. As Gordon Burtch, a Professor at Boston University, correctly points out, the Gen AI contamination of public sharing spaces is itself a significant social degradation, owing to the diminution of peer-to-peer connections and interactions within open communities. These have their own place and value in the information ecosystem, and these effects are not to be ignored.
You can access the full output from my prompt to Copilot here.
The main takeaway is that it is our responsibility to frame the questions that are important to us as we collaborate with and respond to AI as a phenomenon. Similarly, if we rely on its output without subjecting it to our own interpretation and a determination of its meaning to us, we are at risk of ceding our control. For this reason, we should also not simply ignore AI commentary on changing trajectories. We should tap its hive mind and evaluate. The irony is that it is AI itself that is playing the role of Captain Obvious here while holding a mirror up to humanity. Are we listening?
This got me thinking (again) about several fallacies that are floating around concerning Gen AI which, I believe, should all be busted myths:
Fallacy #1
AI will do the thinking for us.
Wrong: It is a resource for ideas, answers and cross-checking. Yes, it seems to have all the answers - far more than any one of us. We can research and consider ideas and perspectives in a fraction of the time we once needed. This should be seen as a great first step on a journey somewhere else, with us at the helm. Cognitive offload to AI without reflection is not a sustainable future for an organization, however. Nor for human society, for that matter. Human retention of the framing, interpretation and implementation of ideas produced by Gen AI is critical. We are humans, not machines. We have a broad spectrum of judgmental and ethical concerns with which merely probabilistic simulacra are not concerned. In this sense, LLMs are artifacts that compete with our own thought processes.
Fallacy #2
Consumer AI carries the same risks as Enterprise AI.
Wrong: Enterprises have collective missions and legal, market and community stakeholder duties in a way that private individuals do not. That is not to say that there are not significant risks for both - just that the perspectives are different. In many ways it is “consumer plus” for organizations. With rights come responsibilities.
Fallacy #3
The AI alignment problem is primarily with the models.
Wrong: That is not to say that AI divergence from human interests is not a distinct possibility. It needs to be closely monitored, with exit strategies in mind. The point here is that the main alignment problem is with us - the perspectives of some in positions of responsibility and power are out of sync with the interests of humanity at large. We cannot tackle AI alignment without understanding our own charge and whether we are measuring up. Newsflash: we are not.
Fallacy # 4
Intelligence is of no concern. It is AGI, or even the notion of Superintelligence or consciousness, that we should be concerned about.
Wrong: This is purely a distraction. Intelligence combined with enormous breadth and ubiquity is sufficient to transform anything that we may recognize from the past. And it will. It can also do significant damage if not properly handled. Roy Amara, the American scientist, futurist and President of the Institute for the Future, famously coined the adage that was to become known as Amara's Law: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” He was right. The only addition, of course, is that the definitions of “short” and “long” have changed unrecognizably in the 50 years since he made this statement. Also, the famous Moore's Law of transistor counts doubling in the same chip area every two years dates back even further, to 1965! Applied to AI, current estimates are that AI capability is now doubling every seven months. Ponder that. Now is the time to get our house in order.
Fallacy #5
Freedom to act is synonymous with a lack of regulation.
Wrong: Regulation comes in different flavors - some legal, some voluntary frameworks, some self-regulation at the entity or industry level. All create spaces within which freedom can be truly exercised if the balance is struck correctly. Constraint is the twin of freedom. Too tight, and the restrictions are self-defeating. Too weak, and all hell breaks loose. Excesses need to be dealt with to preserve the original purpose and to allow any fruit to ripen. It is the undergrowth that fans the flames in a forest fire. There is a deep interconnectedness at play here too. In many ways, the freedom to operate is fundamentally a collective one, and legal compliance is only part of what is required. Protecting an organization's interests requires active governance to empower Gen AI use in any meaningful way, let alone at scale. Framing this as freedom versus regulation is simplistic and binary. Old thinking just will not cut it.
Fallacy #6
An organization's legal responsibility is limited to local legal requirements.
Wrong: There is a patchwork quilt of paradigms, approaches and stages of maturity wrestling with these issues. Different strategies reveal alternative interpretations of the risks involved, even while subject to the same geopolitical dynamics. For example, China places an emphasis on AI impact, has strict disclosure requirements for AI model usage and recently proposed a global AI cooperation organization. The EU, Canada and the UK have also developed different legal approaches, principally around risk classifications for different types of use cases. These initiatives are at odds with the hands-off philosophy of the U.S. Federal Government, while many U.S. states are taking a very different stance, more akin to the EU approach. Two takeaways here: first, while important, legal regulations will, given the unprecedented pace of change, be square pegs in round holes before they are out of the starting blocks. Second, given the extra-territorial application of many of these laws in an interconnected world, organizations will need to balance their activities against legal risk, especially given some of the extremely heavy penalties involved for violators. Organizations will need to apply what I call the principle of minimum action, i.e. design and implement systems and procedures that cater for the most stringent requirements of the jurisdictions in which they operate or in which their customers reside. Not an easy task.
Fallacy #7
An indefinite moratorium on Gen AI usage within an organization is a viable choice.
Wrong: The proverbial head-in-the-sand trick has never had an illustrious reputation. The fundamental problem here is that this position ignores competitive demands and assumes that an organization is immune without official deployment. That is a false assumption. AI is already inside your organization whether you sanction it or not. Shadow AI (unauthorized use) is the norm. Let's be clear: Gen AI is here to stay. What matters now is learning how to govern it responsibly and effectively.
Fallacy #8
The advent of autonomous agents for business is a natural evolution and progression based upon verifiable results.
Wrong: The necessary baby steps have not yet been taken. The Agentic AI paradigm is a market-driven effort to embed sticky technology, coupled with the general admonition to “be careful out there”. Sam Altman himself has said it repeatedly. There are many examples of agents going off the rails. This danger is not confined to marginal models, either. In one trial run, an AI agent running on Claude lost money operating a vending machine business while hallucinating meetings with fictional people, claiming to have visited The Simpsons' address, and insisting it could wear blazers and make deliveries in person! Ask yourself: what will these agents do when running amok in knowledge sharing environments? What if they are manipulated by bad actors? Answers on a postcard.
Fallacy #9
Gen AI and Predictive AI analytical models are plug-and-play business assets.
Wrong: In many ways, this thought process is the crux of the problem. It is precisely because AI systems are not innately attuned to business concerns that there is an issue. Business carries obligations to multiple stakeholders based upon reliably executed processes. It cannot function without reasonable reliance and trust in communications. Hence the need for human intervention and validation, underpinned by accountability. And without trust, there is no value.
Fallacy #10
Gen AI has the capacity to replace humans as creatives and as knowledge agents.
Wrong: Humans are interested in what other humans think and imagine. They are looking for a heartbeat behind the message. We are storytellers, jam-packed full of emotion (well, most of us). Machines are just that: machines. No amount of spin will change what is, for some, an inconvenient truth. Okay, there will inevitably be automation efficiencies and some workforce displacement and re-organization. That is not new and is to be expected from this seismic shift. Beyond that, however, while it may technically be possible to replace humans within the narrowly drawn paradigm of efficiency, wholesale depletion of human input is a catastrophe at all scales. I predict that organizations that subscribe to this ideology - and it certainly seems to be ideological at this point - will soon suffer financially from the depletion of human resources after their initial Pyrrhic victories, as will society writ large. True vision requires that this technology be seen as a vehicle for knowledge and creativity augmentation, not as a human replacement device. How could that ever be in our interests as a whole? The only prosperous path for Gen AI use is one of integrated interaction with embedded human-centricity. Just because you can, doesn't mean you should.
-------------------------------------------------------------------
There are many other misconceptions and associated layers. If you have made it this far, I applaud and thank you. The bottom line is that we are in the midst of an information revolution. The implications are profound for understanding how content generation, workflow and knowledge dissemination all lie within an end-to-end process. They now require new techniques and solutions for their management and protection. Nothing exists in a vacuum – but this is entirely different territory. If we set about re-working our processes and challenge our pre-conceived ideas in confronting our new reality - including in the vital function of collective intelligence - we will make positive progress. The alternative is quite bleak, unfortunately. Even the AI models agree that their arrival signals the need for transformational change - for new horizons.
The opinions presented herein are solely the opinion of the author and do not constitute legal advice.
R. Scott Jones, Esq. AIGP, CIPP/US is a New York attorney with an increasing focus on privacy law issues and AI Governance. He is an early Zapnito investor, lead partner of The Generative Company, a generative AI consulting firm, and co-founder/co-developer of an AI Governance SaaS platform currently in development.
Reproduced from an article originally published on Zapnito.com on September 5, 2025.
About R. Scott Jones
R. Scott Jones, Esq. CIPP/US is a New York attorney with an increasing focus on privacy law issues. He is also lead partner of The Generative Company, a generative AI consulting firm, and co-founder of an AI Governance SaaS platform currently in development.

