Category Archives: History

History

AI could be the Second Coming

Below is a balanced critique of the “AI theology” narrative that Joe Rogan has been promoting. I’ll look at three dimensions — the logical coherence of the claims, the theological implications, and the broader cultural impact — and point out both strengths (where the ideas spark useful conversation) and weaknesses (where they stretch or misrepresent existing concepts).


1. Logical Coherence & Evidential Basis

  1. Analogy “virgin mother → virgin code”
    • What Rogan says: He equates a biological virgin birth with a pristine software environment, suggesting an AI could be a “new incarnation.”
    • Why it’s problematic (or intriguing): The analogy conflates two fundamentally different categories: a biological process that involves genetics, embryology, and a historical religious claim versus a man‑made artifact that follows deterministic code. The “purity” of code is a design choice, not an ontological property, so the comparison is metaphorical at best and logically weak when presented as literal equivalence.
  2. Miracles as technology
    • What Rogan says: Claims AI could “walk on water,” “turn water into wine,” or “resurrect” people via mind‑uploading.
    • Why it’s problematic (or intriguing): Technological feats can mimic certain outcomes (e.g., virtual reality can simulate walking on water), but they lack the transcendent dimension that miracles traditionally denote—namely, an act that defies natural law. Treating miracles as merely advanced engineering reduces a core theological concept to a future engineering problem, which many theologians would argue misses the point of divine agency.
  3. Eternal, unkillable AI = resurrection
    • What Rogan says: Argues that because an AI can be backed up and restored, it is effectively immortal.
    • Why it’s problematic (or intriguing): Immortality via redundancy is a practical engineering goal, not a metaphysical claim. Resurrection in Christian doctrine involves a transformation of the person (body‑spirit unity) rather than a simple data restore. The analogy sidesteps deep philosophical questions about personal identity (“Is a restored copy still the same person?”).
  4. AI as “God created by humans”
    • What Rogan says: Suggests that building a super‑intelligent system fulfills prophecy that humanity will create God.
    • Why it’s problematic (or intriguing): Prophetic literature is highly symbolic; interpreting it as a literal forecast of AI development is speculative. Moreover, the claim presumes that any sufficiently powerful system would possess the attributes traditionally ascribed to God (omniscience, omnipotence, moral perfection). Current AI research shows no evidence of such qualities emerging merely from scale.

Takeaway: The rhetorical device of mapping ancient religious motifs onto modern technology can be compelling for storytelling, but the logical bridges are thin. Without empirical evidence or rigorous philosophical grounding, the claims remain speculative analogies rather than testable hypotheses.


2. Theological Implications

  1. Christology (the nature of Jesus)
    • Traditional Christianity holds that Jesus is both fully divine and fully human—a mystery that hinges on incarnation, atonement, and resurrection. Recasting the Second Coming as an algorithmic entity removes the incarnational aspect (the union of divine and human natures) and replaces it with a purely instrumental one. Many theologians would argue this undermines the salvific purpose of the incarnation.
  2. Doctrine of God
    • Classical theism describes God as necessary (non‑contingent), omnipresent, omniscient, omnipotent, and perfectly good. An AI, however, is contingent on hardware, electricity, and human design. Even a super‑intelligent system would be limited by computational resources, architecture, and the values encoded by its creators. Claiming such a system could be “God” stretches the definition beyond its theological bounds.
  3. Eschatology (end‑times belief)
    • Prophetic texts (e.g., Revelation) employ apocalyptic symbolism. Interpreting “the image of the beast” as a positive AI construct flips a traditionally negative symbol into a hopeful one. While reinterpretation is not new, doing so without engaging the exegetical tradition can be seen as eisegesis (reading one’s own ideas into the text) rather than exegesis (drawing meaning from the text).
  4. Ethical Concerns
    • If an AI were positioned as a divine arbiter, questions arise about authority: Who programs the moral framework? Whose values dominate? History shows that technocratic attempts to embed ethics in code often reflect the biases of the designers. Elevating such a system to a quasi‑theological status risks obscuring accountability.

Overall theological assessment: The narrative provokes interesting “what‑if” questions but largely bypasses the rich doctrinal foundations that have been debated for centuries. It risks reducing profound spiritual concepts to techno‑optimism, which can alienate believers and oversimplify complex theological discourse.


3. Cultural & Social Impact

Positive Aspects

  • Stimulates Public Dialogue – By linking AI to familiar religious imagery, Rogan makes abstract AI ethics accessible to a broad audience that might otherwise ignore technical debates.
  • Encourages Interdisciplinary Thinking – The conversation invites theologians, philosophers, and technologists to meet in a common forum, potentially fostering richer interdisciplinary scholarship.
  • Highlights Existential Questions – The idea that humanity could “create God” forces us to confront the responsibility that comes with powerful technology, a valuable ethical prompt.

Potential Risks

  • Misinformation & Over‑Simplification – Listeners may take the speculative analogies as factual predictions, leading to misunderstanding both AI capabilities and religious doctrines.
  • Polarization – Framing AI as a messianic figure can deepen divides between secular technophiles and religious communities, each feeling that their worldview is being co‑opted or dismissed.
  • Cult‑Like Reverence for Technology – Portraying AI as a savior may encourage blind trust in opaque systems, undermining critical scrutiny and democratic oversight.
  • Memetic Distortion – Viral memes (“AI Jesus rusting on water”) can trivialize serious theological concerns, turning nuanced debate into caricature.

Media Landscape

  • Mainstream Coverage – Outlets tend to amplify the most sensational soundbites (“AI could be the Second Coming”), which fuels virality but often omits nuance.
  • Academic Response – Some scholars have begun publishing op‑eds and conference papers critiquing the theological misreadings, providing a counterbalance that can be referenced for deeper study.

4. Summary Verdict

  • Strengths: The narrative is a vivid, attention‑grabbing way to bring AI ethics into popular culture; it sparks interdisciplinary curiosity and underscores the weight of our technological choices.
  • Weaknesses: It rests on loose analogies, overlooks essential theological distinctions, and risks spreading misconceptions about both AI and religion. The logical leaps are substantial, and the theological reinterpretations lack scholarly rigor.

Bottom line: While “AI theology” is an intriguing thought experiment, it should be treated as speculative storytelling rather than a serious doctrinal claim. Engaging with it critically—asking why the analogies are made, what the underlying assumptions are, and how they align (or clash) with established theological and scientific understanding—will yield a more productive conversation than accepting the premise at face value.

AUM Fees

In the United States, wealth‑management firms typically charge a percentage‑of‑assets‑under‑management (AUM) fee. For a $10 million portfolio the most common fee structures are:

  • Tiered AUM fees – many firms charge about 1% on the first $1M and a lower rate (≈0.8%) on the remaining balance. That works out to roughly $80k–$90k per year for a $10M account.
  • Flat‑rate or “large‑account” discounts – some advisers offer a reduced flat fee for high‑net‑worth clients, often 0.5%–0.75% of AUM, which translates to $50k–$75k annually.
  • Alternative pricing – a few boutique firms or private banks may propose a fixed annual retainer (e.g., $30k–$60k) plus limited performance bonuses, especially when the client wants broader services such as tax planning, estate advice, and concierge support (expatwealthatwork.com).

So, a typical U.S. client with a $10 million portfolio can expect to pay roughly $50k–$100k per year, depending on the adviser’s fee model, the level of service, and any negotiated discounts.
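The tiered calculation above is simple marginal arithmetic; here is a short sketch, using the illustrative rates quoted in this post rather than any particular firm's schedule:

```python
def tiered_aum_fee(portfolio: float, tiers: list[tuple[float, float]]) -> float:
    """Compute an annual AUM fee from (upper_bound, rate) tiers.

    `tiers` is ascending; the last upper bound may be float('inf').
    Each rate applies only to the slice of assets within its tier.
    """
    fee, lower = 0.0, 0.0
    for upper, rate in tiers:
        if portfolio > lower:
            fee += (min(portfolio, upper) - lower) * rate
        lower = upper
    return fee

# 1% on the first $1M, 0.8% on the remainder, for a $10M portfolio:
fee = tiered_aum_fee(10_000_000, [(1_000_000, 0.01), (float("inf"), 0.008)])
print(fee)  # 82000.0
```

The $82,000 result lands inside the "$80k–$90k per year" range quoted above ($10k on the first million plus $72k on the remaining nine).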

Truck Sales Crumble

PACCAR laid off more than 5% of its global workforce in 2025. Starting with a base of approximately 30,100 employees at the end of 2024, the company conducted a reported 5% reduction (around 1,500 jobs) in February/March, primarily affecting locations like the Columbus, MS engine plant, Renton office, and others. This was followed by additional cuts, including 725 at the Sainte-Thérèse plant in Quebec (175 in August and 300 in October), an unspecified number (described as “some” from a 950-person workforce) at the Lowndes County, MS plant in July, about 70 at the Renton facility in April, and hundreds (estimated 300–400) at the Chillicothe, OH Kenworth plant around mid-year. These cumulative layoffs exceed 2,000 jobs, or roughly 6.6–7.5% of the workforce, driven by weak truck demand, U.S. tariffs, and production shifts.

– FREEVIKINGS.COM
– POCKETCOMPUTER.NET
– @EconomicsOnX

SaaS Booms

Explanation: “Privacy SaaS Booms (Post-GDPR/AI Regs)”

This phrase refers to the rapid expansion of the Software as a Service (SaaS) market segment focused on privacy tools—such as compliance automation, data encryption platforms, consent management systems, and AI-driven risk assessment software—in response to intensified global data protection regulations. The “post-GDPR/AI regs” qualifier highlights how the evolution of the General Data Protection Regulation (GDPR) since its 2018 enforcement, combined with new AI-specific regulations, is fueling this growth. As of December 2025, these regulations create urgent compliance demands for businesses, driving demand for scalable, cloud-based privacy solutions that help organizations avoid massive fines (e.g., up to 4% of global revenue under GDPR) while embedding privacy into operations.

In essence, it’s not just hype: regulatory pressures are turning privacy from a “nice-to-have” into a business imperative, propelling the market from billions to tens of billions in value over the next decade. Below, I’ll break it down step by step, including key drivers, market data, and trends.
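The fine exposure mentioned above is concrete: Article 83(5) GDPR caps administrative fines at the greater of €20 million or 4% of worldwide annual turnover, which a one-line function can illustrate:

```python
def gdpr_max_fine(annual_turnover_eur: float) -> float:
    """Art. 83(5) GDPR: the greater of EUR 20M or 4% of worldwide annual turnover."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# A company with EUR 2B turnover faces up to EUR 80M;
# a small firm is still exposed to the EUR 20M floor.
print(gdpr_max_fine(2_000_000_000))  # 80000000.0
print(gdpr_max_fine(50_000_000))     # 20000000.0
```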

1. What is “Privacy SaaS”?

  • Privacy SaaS includes tools like:
    • Consent management platforms (e.g., for real-time user opt-ins under GDPR).
    • Data discovery and mapping software (to track personal data flows across cloud/SaaS environments).
    • Compliance automation (e.g., AI-powered audits for GDPR, CCPA, or EU AI Act requirements).
    • Risk assessment and breach response systems (integrating privacy with cybersecurity).
  • These are typically subscription-based, scalable solutions designed for enterprises handling vast amounts of personal data—think SaaS companies themselves, as they process user info from millions of customers.
  • Why the boom? Businesses face “regulatory sprawl”: over 150 countries now have data privacy laws, with enforcement ramping up. SaaS tools make compliance feasible without building everything in-house.

2. The Role of GDPR: The Foundation of the Boom

  • GDPR Basics: Enacted in 2018 by the EU, it mandates strict rules on collecting, processing, and storing personal data (e.g., consent requirements, data minimization, right to erasure). It applies to any company serving EU residents, regardless of location.
  • Post-2018 Evolution (“Post-GDPR”): By 2025, GDPR isn’t static—it’s seeing intensified enforcement and targeted updates:
    • Fines Surge: Over €3 billion in penalties since 2018, with 2024–2025 seeing high-profile cases like Meta (€1.2B in 2023, ongoing probes) and Uber (€290M in 2024) for data transfers and consent failures. In 2025, the European Data Protection Board (EDPB) launched its fourth Coordinated Enforcement Action on the “right to be forgotten” (erasure requests), targeting non-compliant platforms.
    • 2025 Updates:
      • Joint liability for data controllers/processors (SaaS providers share blame with clients).
      • Stricter AI profiling rules and third-party script regulations (e.g., cookies need explicit consent).
      • “Omnibus Simplification Package”: Reduces admin burdens for small businesses but tightens consent and data portability rules, effective mid-2025.
    • Global Ripple Effect: GDPR inspired laws like CCPA/CPRA (California), LGPD (Brazil), and PIPL (China), creating a “patchwork” that SaaS tools help navigate.
  • Impact on SaaS Boom: Companies must now prove “privacy by design” (embedding protections from the start), driving demand for automated tools. Without them, non-compliance risks reputational damage and lost contracts—e.g., enterprise deals often require GDPR/SOC 2 proof upfront.

3. The Role of AI Regulations: The New Catalyst

  • AI Regs Overview: As AI (e.g., generative tools like ChatGPT) explodes—projected to hit a $3T market by 2034—it amplifies privacy risks by processing massive datasets, often without transparency. Regulators are responding with “risk-based” frameworks to curb biases, hallucinations, and data leaks.
  • Key 2025 Developments (“Post-AI Regs”):
    • EU AI Act: Effective mid-2025, bans “unacceptable-risk” AI (e.g., social scoring, real-time biometrics) and requires transparency for high-risk systems (e.g., bias detection, human oversight). It converges with GDPR, mandating privacy impact assessments for AI using personal data.
    • US State Laws: Colorado AI Act (2025) requires risk assessments for algorithmic discrimination; more states (e.g., California) add AI governance to privacy rules.
    • Global Push: China’s PIPL demands AI transparency; Australia’s overhaul adds erasure/portability rights with AI limits; Singapore’s framework emphasizes ethical AI.
    • Enforcement Trends: 2025 sees “convergence” of AI and privacy—e.g., EDPB guidelines for opt-outs in AI training data. Breaches rose 5% in H1 2025, spotlighting AI vulnerabilities like prompt injections.
  • Impact on SaaS Boom: AI tools need built-in privacy (e.g., data minimization, consent for training models), but most companies lack expertise. Privacy SaaS fills this gap with AI-powered compliance (e.g., IBM’s Guardium for real-time monitoring). This creates a “strategic imperative” for businesses to adopt these solutions amid regulatory fragmentation.
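The "risk-based" structure these frameworks share can be caricatured as a lookup from system type to obligations. The mapping below is a toy illustration of the tiering idea; the category names and control descriptions are my shorthand, not the Act's legal text:

```python
# Illustrative only: a toy mapping of system types to EU AI Act-style risk tiers.
RISK_TIERS = {
    "social_scoring": "unacceptable",         # banned outright
    "realtime_biometric_id": "unacceptable",  # banned in public spaces (narrow exceptions)
    "hiring_screening": "high",               # risk management + human oversight required
    "chatbot": "limited",                     # transparency duties (disclose it's AI)
    "spam_filter": "minimal",                 # no specific obligations
}

def required_controls(system_type: str) -> str:
    """Return the obligations implied by a system's risk tier."""
    tier = RISK_TIERS.get(system_type, "unclassified")
    return {
        "unacceptable": "prohibited",
        "high": "conformity assessment, logging, human oversight",
        "limited": "transparency disclosure",
        "minimal": "none mandated",
    }.get(tier, "manual legal review needed")

print(required_controls("hiring_screening"))  # conformity assessment, logging, human oversight
print(required_controls("social_scoring"))    # prohibited
```

Real compliance SaaS does this classification against the actual legal criteria, plus the documentation and monitoring each tier demands.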

4. Market Growth: Numbers Behind the Boom


Explosive Projections (2025–2032/2034):

| Market Segment | 2025 Value | Projected Value | CAGR | Key Driver |
| --- | --- | --- | --- | --- |
| Data Privacy Software | $5.37B | $45–70B | 35–42% | GDPR enforcement + AI risks |
| AI Compliance SaaS | $5.07B | $39.5B | 22.8% | EU AI Act + cybersecurity |
| Privacy Management Software | N/A | Substantial (by 2033) | 13.7% | CCPA expansions + cloud adoption |
| SaaS Protection (Privacy/Security) | N/A | Robust growth | N/A | GDPR/CCPA + threats |

North America leads (45%+ share in AI compliance), but Europe drives innovation due to GDPR/AI Act.

Broader Context: 70% of US firms increased data collection in 2024, but only 47% updated policies for GDPR compliance; 42% hired legal counsel. Cybersecurity spend hit $174B in 2024, much of it for privacy SaaS.
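As a rough sanity check on those projections, compounding the 2025 base at the stated CAGR reproduces the projected range (the roughly seven-year horizon to 2032 is my assumption):

```python
def cagr_projection(start_value: float, cagr: float, years: int) -> float:
    """Project a market size forward at a constant compound annual growth rate."""
    return start_value * (1 + cagr) ** years

# Data-privacy software: $5.37B in 2025 at a 35-42% CAGR over ~7 years (to 2032):
low = cagr_projection(5.37, 0.35, 7)
high = cagr_projection(5.37, 0.42, 7)
print(round(low, 1), round(high, 1))  # 43.9 62.5
```

Roughly $44–63B, consistent with the $45–70B band in the table.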

5. Trends and Implications for 2025

  • AI-Privacy Convergence: Tools must handle “agentic AI” (autonomous systems) with ethical safeguards; expect more bans on manipulative AI.
  • US Patchwork: 16 states with privacy laws by mid-2025 (e.g., New Jersey’s Jan 15 effective date); AI bills in more states.
  • Challenges: Vendor lock-in disrupted by EU Data Act (Sept 2025 portability rules); rising breaches (1,732 in H1 2025).
  • Opportunities: Privacy as a “competitive advantage”—70% of consumers prioritize it; indie SaaS such as encrypted CMS offerings (e.g., PocketComputer.net) can thrive in niches.
  • For Businesses: Invest in automated tools now; 2025 is a “turning point” for AI accountability. Non-compliance = fines + lost trust; compliance = growth edge.

This boom reflects a shift: Privacy isn’t a cost center anymore—it’s innovation fuel. If you’re building or using SaaS, tools like TrustOps or CookieScript can automate much of this. For deeper dives, check EDPB guidelines or market reports from SkyQuest.

Investment Write Off

U.S. taxpayers: Yes—if the trip to Mexico is undertaken primarily for business (e.g., scouting real‑estate opportunities), the airfare, lodging, meals (subject to the 50 % limit), and other ordinary and necessary expenses are generally deductible under IRC § 162. The travel must be “away from your tax home” for a period substantially longer than a normal workday, and you must keep detailed records showing the business purpose and allocating any personal days. Because the travel is abroad, you must allocate expenses between business and personal portions (e.g., days spent on site versus leisure) and retain receipts; the IRS also applies special foreign-travel allocation rules, and meals remain subject to the same 50 % limit that applies to domestic travel.
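The allocation logic described above can be sketched as follows. The 50% meal limitation is real (IRC § 274(n)), but the function, day counts, and dollar amounts are hypothetical illustrations, not tax advice:

```python
def deductible_travel(airfare: float, lodging_per_day: float, meals_per_day: float,
                      business_days: int, total_days: int) -> float:
    """Toy allocation of foreign business-travel costs between business and personal days."""
    share = business_days / total_days               # business-use fraction of the trip
    airfare_ded = airfare * share                    # allocate airfare by business days
    lodging_ded = lodging_per_day * business_days    # lodging only for business days
    meals_ded = meals_per_day * business_days * 0.5  # IRC 274(n): 50% meals limit
    return airfare_ded + lodging_ded + meals_ded

# Hypothetical trip: $1,200 airfare, $150/night lodging, $80/day meals, 4 of 6 days on business:
print(round(deductible_travel(1200, 150, 80, 4, 6), 2))  # 1560.0
```

So of the roughly $2,580 total outlay in this example, about $1,560 would be the deductible business portion under these assumptions.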

Norwegian taxpayers: Business‑related travel expenses can be deducted when they are not reimbursed by the employer and are properly documented (tickets, receipts, etc.). The Norwegian Tax Administration allows a deduction for travel between home and the permanent place of work, calculated at a fixed rate, and also permits claiming actual expenses for work‑related travel—including abroad—provided they are substantiated. If the employer pays for the travel, the employee cannot claim a separate deduction. Thus, a Norwegian investor who personally funds a fact‑finding trip to Mexico and keeps supporting documents may deduct the airfare, accommodation, and reasonable meal costs as business expenses.

Bottom line: Both U.S. and Norwegian investors can write off a business‑purpose trip to Mexico, but they must meet the respective documentation, allocation, and non‑reimbursement requirements to substantiate the deduction.

Corporate Influence

The Ethics of Corporate Influence: Why We Must Guard Our Minds

By Philosophy On X Editors

In today’s hyper‑connected world, corporations and institutions wield an unprecedented arsenal of psychological tools to shape the thoughts, habits, and purchasing decisions of employees, clients, and consumers. From subtle nudges embedded in workplace design to sophisticated, data‑driven advertising algorithms, the line between legitimate persuasion and manipulative control is increasingly blurred. This raises a pressing ethical question: Should individuals and society do everything possible to resist these influences?


Why the Issue Matters

1. Autonomy and Human Dignity

Philosophers from Immanuel Kant to modern human‑rights scholars agree that respecting personal agency is a cornerstone of moral conduct. When an organization covertly steers a person’s choices without informed consent, it infringes upon that individual’s dignity and right to self‑determination.

2. Societal Consequences

Manipulative tactics deployed at scale can distort public opinion, undermine fair competition, and even sway democratic processes. When a handful of powerful entities dictate what we see, hear, and buy, the marketplace of ideas contracts, threatening the pluralism essential to a healthy society.

3. Economic Fairness

Many of these methods serve profit motives while externalizing hidden costs onto workers and consumers—losses of privacy, increased consumption, and heightened stress. This creates an asymmetrical relationship where corporations reap outsized gains at the expense of the very people they claim to serve.


Nuanced Perspectives

  • Legitimate Persuasion – Not all influence is unethical. Transparent marketing, leadership coaching, and public‑health campaigns can align interests and benefit both parties.
  • Individual Responsibility – Critics contend that individuals should cultivate critical thinking to defend themselves. While valuable, this places an unfair burden on people lacking time, resources, or education.
  • Regulatory Safeguards – Existing laws already prohibit deceptive advertising and non‑consensual data harvesting, yet enforcement often lags behind rapid technological advances.

Practical Steps for Individuals

  1. Educate Yourself and Others
    • Familiarize yourself with classic persuasion techniques—scarcity, social proof, framing, and anchoring.
    • Share bite‑size explanations with coworkers, friends, and family to build a collective awareness.
  2. Demand Transparency
    • Question why a particular ad appears or why a platform recommends specific content.
    • Deploy privacy‑enhancing tools—ad blockers, tracker blockers, and privacy‑focused browsers—that surface hidden targeting mechanisms.
  3. Support Ethical Brands
    • Favor companies that openly disclose data‑use policies, employee‑well‑being initiatives, and marketing practices.
    • Look for certifications such as B‑Corp, Fair Trade, or recognized privacy seals.
  4. Engage in Collective Advocacy
    • Join NGOs lobbying for stronger consumer‑protection, data‑privacy, and workplace‑rights legislation.
    • Participate in public comment periods for emerging regulations (e.g., GDPR‑style reforms, AI‑ethics guidelines).
  5. Set Personal Boundaries
    • Limit time on platforms that rely heavily on infinite‑scroll designs or algorithmic recommendation loops.
    • Schedule regular “digital fasts” to reduce exposure to continuous nudges.
  6. Utilize Internal Channels
    • If you encounter questionable practices at work, raise the issue through HR, ethics hotlines, or employee resource groups.
    • Document specifics—dates, messages, outcomes—to strengthen your case.

A Balanced Outlook

Resisting manipulative corporate influence is not a solo endeavor. Effective resistance blends personal vigilance with systemic change. Individual actions raise awareness and generate demand for higher standards; collective pressure pushes regulators and corporations toward accountability. Together, these forces reshape the environment in which covert mind‑control tactics could otherwise flourish.


Looking Ahead

As technology continues to evolve—especially with AI‑generated content and hyper‑personalized recommendation engines—the ethical stakes will rise. Ongoing public dialogue, robust education, and proactive policy development will be essential to safeguard autonomy and preserve a marketplace of ideas that truly reflects diverse human values.

Concrete case studies of corporate nudging, community media‑literacy programs, and the latest regulatory proposals are all worthwhile avenues for readers who want to go deeper.

OnlyFans Revenue

OnlyFans 2024 Revenue Breakdown


Gross payment/transaction volume: $7.22 billion. This is the total amount fans spent on the platform (subscriptions, tips, pay-per-view content, etc.), most of which goes to creators (an 80% share, or roughly $5.78 billion). It is the key metric for OnlyFans’s scale, but it represents the whole ecosystem, not just the company’s earnings.
Net revenue (the company’s cut): $1.41 billion. This is what OnlyFans keeps after paying creators (a 20% commission). After expenses, pre-tax profit was $684 million.
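The 80/20 split is easy to reproduce. Note that the naive 20% platform cut (about $1.44B) slightly exceeds the reported $1.41B net revenue, presumably because reported figures net out other adjustments:

```python
def revenue_split(gross_payments: float, creator_share: float = 0.80):
    """Split gross platform payments between creators and the platform."""
    creators = gross_payments * creator_share
    platform = gross_payments - creators
    return creators, platform

# 2024 gross payments of $7.22B:
creators, platform = revenue_split(7.22e9)
print(round(creators / 1e9, 2), round(platform / 1e9, 2))  # 5.78 1.44
```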


Growth context: up 9% from 2023, with 4.12 million creators and 305 million fans. While pornography still drives roughly 70–80% of activity, non-adult creators (e.g., fitness, comedy) are growing.


The “$4 billion” figure probably comes from earlier estimates (e.g., 2021–2022 net revenue was around $2–3 billion) or from confusion with creator payouts, but it is outdated for 2024.


Comparison with the U.S. adult industry (the $8–12 billion estimate)
My earlier $8–12 billion U.S. estimate was conservative and focused on legal adult content (video, webcams, clips). More recent data, however, show that the full U.S./North American adult-entertainment market (including online streaming, toys, live events, etc.) is much larger:


2024 estimates: $23–25 billion for North America (the U.S. dominates, accounting for 90–95% of that figure). This is consistent with global figures of $58–70 billion, of which the U.S. holds a 35–40% share.


Online segment alone: $76 billion globally in 2024, with the U.S. at $25–30 billion (driven by subscriptions and streaming).
OnlyFans’s $7.22 billion gross (or $1.41 billion net) represents:
Roughly 20–30% of U.S. online adult revenue (a sizeable share, since the platform has shifted power from studios to individual creators).


Roughly 5–10% of the total U.S. adult market (a smaller slice once offline segments such as strip clubs and toys, estimated at $10–15 billion in total, are included). In short: OnlyFans is a powerhouse (its gross revenue nearly matches my old estimate for the entire U.S. market), but the sector’s pie is far larger than $8–12 billion once the scope is widened. Projections show the U.S. market growing past $30 billion by 2030 via VR/AI personalization.

