Pushing the Boundaries of AI: Google Gemini 2.5 Pro and OpenAI o1 in Reasoning, Coding, and Multimodal Tasks
Explore how Google Gemini 2.5 Pro and OpenAI o1 are pushing AI boundaries in reasoning, coding, and multimodal tasks. A quick look at their strengths, differences, and real-world impact.
AI INSIGHT
Rice AI (Ratna)
5/21/2025 · 16 min read


The landscape of artificial intelligence is rapidly evolving. In late 2024 and early 2025, two technology leaders – Google DeepMind and OpenAI – unveiled next-generation models designed to tackle some of the hardest problems in AI: Google’s Gemini 2.5 Pro and OpenAI’s o1. These “reasoning models” promise to revolutionize what machines can do by thinking through complex tasks before answering. This article examines how these models push the envelope in advanced reasoning, code generation, and multimodal intelligence, drawing on the latest technical reports and benchmark studies. We analyze their capabilities on key tasks, compare their design philosophies, and consider what the future holds for organizations that adopt such powerful AI.
The New Era of AI Reasoning Models
Traditionally, large language models (LLMs) like GPT-3 or GPT-4 have excelled at generating fluent text but often lacked multistep reasoning. In response, AI developers have introduced a new class of “reasoning” models that intentionally spend extra compute and time to “think” through problems (TechRepublic, 2025; OpenAI, 2024). In practice, these models use techniques like chain-of-thought prompting and additional fine-tuning to break problems into steps, verify partial answers, and reduce mistakes. As one industry analyst notes, reasoning AIs “evaluate context, process details methodically, and fact-check responses to ensure logical accuracy” – albeit at higher computational cost. This design shift marks a break from earlier LLMs that answered as quickly as possible, and it can yield more reliable solutions for complex queries (Kerner, 2024; Jackson, 2025).
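To make the idea concrete, the sketch below shows what an explicit chain-of-thought prompt looks like when sent to a conventional chat model through the OpenAI Python SDK. It is illustrative only: the model name, API key setup, and word problem are placeholders chosen for this example, not details drawn from the cited reports.

```python
# Illustrative chain-of-thought prompt sent to a conventional chat model.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set;
# the model name and word problem are placeholders.
from openai import OpenAI

client = OpenAI()

prompt = (
    "A train leaves at 09:00 travelling 60 km/h; a second train leaves the "
    "same station at 10:00 travelling 90 km/h on the same track. "
    "When does the second train catch up?\n"
    "Think step by step: list what is known, set up the equation, "
    "then give the final answer on its own line."
)

response = client.chat.completions.create(
    model="gpt-4o",  # an ordinary (non-reasoning) model, prompted to reason explicitly
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Reasoning models such as o1 and Gemini 2.5 Pro internalize this behavior during training, so the extra "think step by step" scaffolding is no longer the user's responsibility.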
OpenAI triggered this trend in September 2024 by introducing o1 (code-named “Strawberry”), a family of LLMs trained to reflect on queries before answering. Google followed in March 2025 with Gemini 2.5 Pro, its first experimental “thinking model” built into the Gemini family. According to Google, Gemini 2.5 Pro is “our most intelligent AI model,” engineered to reason through its thoughts rather than simply respond. Likewise, OpenAI describes o1 models as able to “spend more time thinking before they respond,” which enables them to solve harder problems in science, coding, and math.
Both companies emphasize that these models excel on hard benchmarks that strain ordinary LLMs. For example, Google reports Gemini 2.5 Pro leads on math and science tests like the American Invitational Mathematics Examination (AIME) and the GPQA (Graduate-Level Google-Proof Q&A) benchmark. OpenAI likewise highlights o1’s prowess: in an internal evaluation, the new reasoning model solved 83% of problems on a qualifying exam for the International Mathematics Olympiad, compared to only 13% for the previous GPT-4o baseline. These leaps in problem-solving ability have prompted competitors (Anthropic, xAI, DeepSeek, etc.) to develop similar models, signaling an industry-wide shift toward reasoning AI.
OpenAI o1: A New Paradigm in Complex Reasoning
OpenAI’s o1 is a family of generative transformer models optimized for advanced reasoning. Its development history reflects a deliberate move beyond GPT-4’s approach. After internal tests showed “promising results on mathematical benchmarks,” OpenAI released o1-preview in September 2024. The full o1 model followed in December 2024 along with a higher-tier o1 pro mode on the new ChatGPT Pro subscription.
The core innovation of o1 is its training process: models are trained to reason step by step, akin to a person working through a problem. Technically, this often involves chain-of-thought learning and reinforcement learning to encourage the model to “think about the right approach” before answering. As one explanation notes, unlike previous models o1 “spends more time processing information before responding” and tackles hard problems with multistep reasoning strategies. In practice, this means o1 may generate intermediate reasoning or simply take longer to produce a polished answer. OpenAI’s early results suggest the approach works: in initial tests the reasoning model performed at the level of a PhD student on difficult science benchmarks and “excels in math and coding”.
OpenAI reports substantial gains for o1 on specialized tasks. In coding competitions, for instance, the original o1-preview model scored at the 89th percentile on Codeforces contests – far above GPT-4’s performance. On an International Mathematics Olympiad qualifying exam, o1 solved 83% of problems where GPT-4o managed only 13%, a dramatic improvement. To further boost reliability, OpenAI later introduced o1 pro mode (via ChatGPT Pro), which uses even more compute. According to OpenAI, o1 pro mode reduces errors on tough questions by up to 34% and performs better than standard o1 on benchmarks in math, science, and coding. For example, on the 2025 AIME exam o1 pro achieved an 86% pass rate versus 78% for ordinary o1. OpenAI emphasizes that these higher-tier models yield “more reliably accurate and comprehensive responses,” especially in technical domains like data science and programming.
Importantly, OpenAI positions o1 as a complement to its other models rather than a one-size-fits-all replacement. In their announcement, OpenAI notes that for many routine questions, existing models (like GPT-4 with browsing) may still be faster and more capable than o1. The reasoning models are intended for the hardest tasks where extra thinking power is beneficial. For developers, OpenAI also released o1-mini, a smaller, faster version optimized for code. This variant “does particularly well at coding tasks, making it a good choice for developers who need quick, reliable responses”. In short, OpenAI’s strategy is to integrate reasoning where needed (through o1) while retaining its flexible GPT ecosystem for other use cases.
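As a rough illustration of how a developer might use that smaller variant, the sketch below sends a small coding task to o1-mini through the OpenAI Python SDK. Model availability depends on your account tier, and the task string is an invented example rather than anything from OpenAI's documentation.

```python
# Hypothetical coding request to the smaller o1-mini reasoning model.
# Assumes API access to o1-mini (availability depends on the account tier).
from openai import OpenAI

client = OpenAI()

task = (
    "Write a Python function merge_intervals(intervals) that merges "
    "overlapping [start, end] pairs and returns the merged list, "
    "then add three doctest examples."
)

response = client.chat.completions.create(
    model="o1-mini",  # reasoning model tuned for fast coding help
    messages=[{"role": "user", "content": task}],
)
print(response.choices[0].message.content)
```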
Google Gemini 2.5 Pro: “Thinking” at Scale
Google’s response to the reasoning paradigm is Gemini 2.5 Pro Experimental, the first in a new 2.5 series of models. Introduced in March 2025, Gemini 2.5 Pro is described by Google DeepMind as a “thinking model” that pauses to analyze a problem before answering. According to CTO Koray Kavukcuoglu, Gemini 2.5 is “our most intelligent AI model” and tops multiple benchmarks by a significant margin. Branding aside, the core idea is straightforward: Gemini 2.5 builds reasoning into the model’s architecture and training so that every query benefits from extra scrutiny.
A key differentiator for Gemini is its natively multimodal foundation. From the start, Google designed Gemini to handle text, code, images, audio, and video all in one model. Gemini 2.5 Pro continues this trend: it can, for example, understand an image or video context while generating text or code. This multimodal ability is integrated directly, not through chaining separate models. As Google notes, 2.5 Pro “comprehends vast datasets and handles complex problems from different information sources, including text, audio, images, video and even entire code repositories”.
In practical terms, Gemini 2.5 Pro delivers unprecedented scale. It ships with a 1 million token context window – roughly 750,000 words – allowing it to process entire books or large data dumps in one prompt. Google is already planning to double this to 2 million tokens. This means a user could ask Gemini 2.5 Pro to analyze a complete report, long-form codebase, or multi-page document in a single go, far beyond the 128K (or at most 256K) token limits of earlier models. Such capability could transform tasks like summarizing legal cases, generating novel-length content, or writing complex software.
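As a hedged sketch of what such a long-context request could look like, the snippet below loads a large text file and sends it to Gemini in a single prompt via the google-generativeai SDK. The model id and file name are assumptions for illustration; check Google's current model list and documentation before relying on them.

```python
# Sketch of a long-context request with the google-generativeai SDK.
# The model id and file name are assumptions; consult the current model list.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-2.5-pro-exp-03-25")  # assumed model id

# With a 1M-token window, an entire report or codebase dump can fit in one prompt.
with open("annual_report_full.txt", encoding="utf-8") as f:
    report = f.read()

response = model.generate_content([
    "Summarize the key risks discussed in this report and cite the sections they come from:",
    report,
])
print(response.text)
```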
According to Google’s published data, Gemini 2.5 Pro excels in reasoning and coding benchmarks. In testing with no extra trickery (like voting ensembles), it leads in math and science exams: for example, it achieves state-of-the-art scores on GPQA Diamond (Graduate-Level Google-Proof Q&A) and the 2025 AIME math test. It also scored 18.8% on Humanity’s Last Exam, a broad multimodal test of thousands of math, science, and humanities questions – outperforming most rival models on that benchmark. For coding, Google reports a “big leap” over Gemini 2.0: on SWE-Bench Verified (an agentic code challenge), 2.5 Pro scored 63.8%. That matches independent reports: in one analysis Gemini 2.5 Pro reached 68.6% on the Aider Polyglot code-editing benchmark and 63.8% on SWE-Bench – strong results, but just behind Anthropic’s Claude 3.7 on the latter.
Qualitative examples underline the capabilities. Google highlights that Gemini 2.5 Pro can generate visually compelling web apps or even complete games from a short prompt. In one demo, the model took a single line prompt and produced fully functional code for an endless runner video game, using its reasoning to handle game logic and assets (Google Developers Blog, 2025). Similarly, Google shows Gemini building interactive simulations (fractal graphics, animations, bubble charts) by “thinking through” code generation and execution step by step. These agentic coding skills (having the AI design and run code to achieve a goal) illustrate how Gemini is pushing beyond static text generation into dynamic content creation.
Overall, Gemini 2.5 Pro is offered as an “experimental” release for developers and subscribers. It is accessible via Google’s AI Studio and the Gemini Advanced subscription, with broader enterprise availability (Vertex AI) planned soon. Google emphasizes that these reasoning capabilities will be integrated natively across its models going forward, effectively retiring the separate “thinking” label. In short, Gemini 2.5 Pro embodies Google’s bet: that combining deep reasoning with massive context and multimodality will unlock a leap in AI utility.
Benchmarking Reasoning and Problem-Solving
What do independent and Google-supplied tests tell us about these models’ actual performance? The picture is one of generally very high scores on hard tasks, but also some nuances in who leads where. Google’s data and industry tests confirm that Gemini 2.5 Pro is among the best at academic reasoning. For example, it scored 86.7% on the 2025 AIME math exam and 84.0% on the GPQA science benchmark. On the broad Humanity’s Last Exam, it leads with 18.8%, indicating it answers nearly one in five of those tough questions correctly. Notably, Google obtained these results without expensive majority-voting techniques, underscoring the model’s raw capability.
OpenAI’s o1 models post comparably impressive numbers. Although OpenAI did not publicize an AIME score for the original o1 release, it reported solving 83% of problems on an International Mathematics Olympiad qualifying exam in one test. In OpenAI’s internal AIME evaluation, o1 pro mode hit 86% (compared to 78% for standard o1). These figures put o1 in the same ballpark as Gemini on high-level math tasks. Likewise, both companies show success on science questions: OpenAI said o1 performs like a PhD student on advanced science problems, and Gemini’s 84.0% GPQA score is at human exam level.
However, when it comes to coding tasks, the results diverge more. Google’s own tables show Gemini 2.5 Pro scoring 63.2–63.8% on the SWE-Bench coding exam (depending on the exact test setup) and 72.7–76.5% on the Aider Polyglot code-editing test. In independent benchmarking, Gemini 2.5 Pro achieved 68.6% on Aider Polyglot but only 63.8% on SWE-Bench Verified. OpenAI’s models, by contrast, slightly outperform Gemini on these metrics: for Aider Polyglot they scored around 81% (whole code), and o1 pro mode hit 69.1% on SWE-Bench (versus Gemini’s 63.2%). In practical terms, this suggests that while Gemini is extremely strong at creating and transforming code (especially for web apps), OpenAI’s reasoning stack is a bit more accurate on formal code editing benchmarks.
Both sides frame these differences in context. Google notes that despite trailing on one software test (SWE-Bench Verified), Gemini “excels at creating visually compelling web apps and agentic code applications” and can even generate a complete video game from a brief prompt. OpenAI emphasizes the reliability boost of its o1 pro mode: in one evaluation, a model counted as reliable only if it answered the same question correctly in four out of four attempts, and o1 pro mode cleared that stricter bar far more often than GPT-4 (e.g., roughly 80% 4/4 reliability on AIME versus 37%). In short, both Google and OpenAI claim the upper hand in coding, but external tests show the gap is small and task-dependent.
On multimodal benchmarks, Gemini’s strengths shine. The model led in Google’s internal tests on image and video understanding: for instance, it scored 79.6% on a visual reasoning benchmark (MMMU) and 84.8% on a video understanding task. OpenAI’s o1 was not explicitly tested on these in public reports. (Note: OpenAI’s later o3 model tackles visual reasoning tasks with chain-of-thought, but o1 itself is primarily a text/coding model.) Overall, Google’s published results suggest Gemini 2.5 Pro sets a new state of the art on a variety of academic benchmarks, broadly edging out contemporaries. But on specific coding benchmarks like SWE-Bench, it still trails the top performer (Anthropic’s Claude 3.7 Sonnet) by a few points. This mixed outcome supports a balanced view: Gemini 2.5 Pro excels in math and multimodal reasoning, whereas OpenAI’s o1 series shows equally impressive reasoning prowess and holds an edge on certain code-centric metrics.
Advancing Code Generation and Developer Productivity
Both companies highlight coding as a major use case for their models. OpenAI points out that o1 models were tested on programming contests: the preview model reached the 89th percentile in Codeforces coding competitions. In practice, this means that in raw code-writing ability, o1 outperforms most human contestants in those contests. The o1-mini model is specifically tuned for code: it provides faster responses with nearly the same accuracy, making it well suited to in-development support. OpenAI also touts o1’s ability to write and debug code; for example, it could generate working code from complex instructions or explain the cause of a program bug given a screenshot.
Google’s focus is on agentic code generation: using Gemini not just to write a snippet, but to build full applications autonomously. Gemini 2.5 Pro has been demonstrated creating complete interactive projects from brief prompts – a “video game from a single line prompt,” or rich JavaScript animations (DeepMind blog). It also performs strongly on targeted code-evaluation benchmarks. On SWE-Bench Verified, which tests an agent’s ability to solve programming tasks iteratively, Gemini 2.5 Pro scored 63.8% (per Gemini’s report). On Aider Polyglot, which measures code-editing accuracy, Gemini achieved 68.6% – a strong result, though below the roughly 81% OpenAI reports for its own models. Google suggests this means Gemini is particularly good at complex, multi-file projects and agentic tasks, rather than line-by-line editing.
To illustrate, consider a developer asking the AI to implement a specified algorithm or design a web page. Gemini 2.5 Pro can integrate code, images, and even data from audio or video to produce end-to-end solutions. OpenAI’s o1, meanwhile, is a powerful assistant for writing and reasoning about code in a largely text-based environment. Both models can generate new code or refactor existing code. The practical difference may come down to deployment: Gemini is accessible via Google’s Cloud and Gemini app, whereas o1 is available through ChatGPT Plus/Pro and the OpenAI API. In either case, technical teams will need to craft careful prompts and integration tools, and to validate outputs (these models can still make errors). But the trend is clear: advanced LLMs are increasingly capable coders and could transform software development workflows.
Pioneering Multimodal Intelligence
A defining feature of Gemini 2.5 Pro is its native multimodal capability. Even before Gemini 2.5, the Gemini family was built to handle text, code, images, audio, and video in one model (unlike GPT-4 which adds vision via a separate system). Gemini 2.5 Pro ships with this multimodal ability intact. As Google’s blog explains, the model can “comprehend vast datasets” across modalities. Concretely, a user can feed Gemini 2.5 Pro a prompt that includes an image, a block of text, and code, and the model will reason about all of it together. For example, a marketer could upload product photos along with a spreadsheet of sales data and ask Gemini to create a presentation; the model would reason over both visual and tabular information.
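A minimal sketch of that marketer scenario, assuming the google-generativeai SDK and Pillow, might look like the following; the model id and file names are illustrative placeholders rather than values from Google's documentation.

```python
# Illustrative multimodal prompt: one request mixing an image and tabular text.
# Assumes the google-generativeai SDK and Pillow; model id and files are placeholders.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-2.5-pro-exp-03-25")  # assumed model id

photo = Image.open("product_photo.png")
with open("q1_sales.csv", encoding="utf-8") as f:
    sales_csv = f.read()

response = model.generate_content([
    "Draft three presentation bullet points linking what you see in this "
    "product photo to the attached quarterly sales figures:",
    photo,       # image part
    sales_csv,   # text part
])
print(response.text)
```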
OpenAI is also pushing into vision-language reasoning, though it has taken a slightly different approach. The original o1 model (and GPT-4o) already accepts images and can caption or answer questions about them. But the April 2025 “Thinking with images” update shows OpenAI extending chain-of-thought reasoning within images. In this architecture, GPT-4o and its successors (o3, etc.) can manipulate the image (zoom, rotate) as part of solving a problem. The blog shows examples like analyzing a photo of handwritten math problems or a messy engineering diagram. In essence, OpenAI is enabling its models to reason about visual content the way they reason about text – an important multimodal advance. This parallels Gemini’s vision capabilities, though OpenAI’s current chain-of-thought image reasoning is demonstrated on the GPT-4o series rather than on the original o1 model.
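For comparison, here is a hedged sketch of sending an image to an OpenAI vision-capable model. It uses gpt-4o rather than o1, reflecting the point above that chain-of-thought image reasoning is demonstrated on the GPT-4o/o3 line; the image URL is a placeholder.

```python
# Hedged sketch: sending an image to an OpenAI vision-capable model.
# gpt-4o is used as the example vision model; the image URL is a placeholder.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Transcribe the handwritten equation in this photo and solve it step by step."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/handwritten_math.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```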
The long context window also plays into multimodality. With 1M tokens, Gemini 2.5 Pro can take in entire books of text, full audio transcripts, or video as data. OpenAI’s o1 and GPT-4o currently handle much shorter inputs (tens of thousands of tokens). In the future, we expect all such models to extend context significantly. The upshot for organizations is that tasks once considered impossible for AI – like reviewing an entire video conference transcript and taking action, or designing an app with multi-format inputs – are becoming feasible. Early users (tech companies and research labs) are already exploring these scenarios.
Of course, multimodal models also face challenges. They require careful input formatting, and even sophisticated models can misinterpret images or produce visually plausible but inaccurate outputs. There is also the question of data privacy and bias when feeding proprietary images and audio into third-party AI. In a professional setting, organizations must ensure security and accuracy (for instance, by combining these models with domain-specific tools or verification). The potential is high – experts believe multimodal reasoning will be key to building AI agents that can interact with the world – but the technology is still maturing (Zeff, 2025).
Implications for Industry and Transformation
What do these advances mean for businesses and organizations? In many ways, they amplify earlier promises of AI by removing previous limitations. Complex analytical tasks that stumped AI before – for example, solving a novel engineering problem, writing significant code, or interpreting a mix of text and images – are now within reach. This can accelerate innovation in fields like software engineering, data analytics, scientific research, and content creation. For instance, a finance team might use a reasoning model to analyze patterns across hundreds of market reports and news images, generating actionable insights. An R&D group could ask the AI to propose a new algorithm given technical requirements. In software development, programmers can treat the model as an expert collaborator, writing boilerplate code or debugging complex logic at scale.
Consulting firms and enterprise planners are taking note. As TechCrunch observes, reasoning models are expected to be central to AI agents – systems that can perform tasks autonomously with minimal human oversight. This raises the possibility of automated agents that, for example, handle routine programming fixes, triage customer queries with deep understanding, or even run controlled simulations in physical environments. The very concept of digital transformation accelerates: tasks that once required specialized skills might be automated or augmented by AI, changing workforce needs.
However, the hype must be tempered with pragmatic assessment. These models are not magic wands. They require significant compute resources (making them expensive to run at scale) and specialized engineering to deploy safely. TechCrunch cautions that reasoning models “are also more expensive” to operate due to the extra computation. Early users report that responses can be slower (OpenAI’s interface shows progress bars for o1 pro mode) and that the models sometimes fail to generalize beyond their training. Organizations will need to evaluate trade-offs: when do the benefits of better reasoning justify the cost? They must also monitor for errors; even advanced models can hallucinate or rely on outdated information.
The balanced perspective is clear: as powerful as Gemini 2.5 Pro and o1 are, they complement – not replace – existing systems. For example, OpenAI itself notes that “for many common cases GPT-4o will be more capable in the near term”. Similarly, Google still offers its other Gemini models for tasks that need web grounding or file uploads that the experimental Gemini 2.5 release does not yet support. In practice, organizations may deploy a mix of models: using reasoning AIs for heavy analytical tasks and faster models for day-to-day queries.
Challenges and Considerations
Several practical considerations accompany these new models:
Cost and Infrastructure: Running Gemini 2.5 Pro or o1 pro requires high-performance cloud infrastructure or subscriptions (Gemini Advanced at $20/month; ChatGPT Pro at $200/month per user). For enterprise scale, costs can mount quickly. Companies must budget for compute and possibly negotiate custom arrangements. Consulting firms can assist by estimating ROI: which tasks truly need these models’ power, and which can be handled by cheaper alternatives.
Latency and User Experience: These models trade speed for depth. As OpenAI notes, o1 pro mode answers take longer (a few seconds more, with progress bars). In user-facing applications, this latency must be managed (for instance, by asynchronous tasks or caching). Teams should design interfaces that set expectations, as sluggish performance can hinder adoption.
Reliability and Verification: Despite better accuracy, reasoning models still err. Organizations should build verification layers, for example checking generated code against unit tests before accepting it (see the sketch after this list). For sensitive domains (legal, medical, financial), outputs must be reviewed by experts. Consulting services can help set up these guardrails and interpret model “reasoning” outputs correctly.
Integration and Data Privacy: Incorporating these models into existing workflows requires integration work. Data pipelines, API calls, and compliance (especially for on-prem vs cloud) need planning. Moreover, feeding internal documents or proprietary code into an external AI risks data leakage. Firms must consider whether to use cloud services or wait for on-prem deployment options. Experts (like our consultants) can guide architecture choices and ensure proper data governance.
Ethics and Bias: New capabilities bring new risks. If a model “thinks” its way to a biased conclusion, it may do so more convincingly. Companies must audit outputs for fairness and compliance with ethical guidelines. Transparency features like chain-of-thought can help here by revealing the model’s reasoning steps, but this also raises the question of how to handle potentially sensitive reasoning content securely.
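The sketch below illustrates the verification-layer idea from the list above in its simplest form: model-generated code is accepted only if it passes predefined unit tests. The "generated" snippet is a stand-in for a real model response, and executing untrusted code should be sandboxed in production systems.

```python
# Minimal verification layer: accept model-generated code only if it passes
# predefined unit tests. The generated snippet is a stand-in for a real model
# response; exec() on untrusted code should be sandboxed in production.
generated_code = """
def add(a, b):
    return a + b
"""

def passes_tests(source: str) -> bool:
    namespace = {}
    try:
        exec(source, namespace)             # load the candidate implementation
        assert namespace["add"](2, 3) == 5  # expected behavior, defined in advance
        assert namespace["add"](-1, 1) == 0
        return True
    except Exception:
        return False

if passes_tests(generated_code):
    print("Generated code accepted.")
else:
    print("Generated code rejected - return it to the model or a human reviewer.")
```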
In short, while Gemini 2.5 Pro and o1 offer unprecedented power, integrating them is nontrivial. A consulting partner can help organizations navigate these challenges: from piloting use cases, to retraining staff on AI-augmented workflows, to establishing monitoring processes. The goal is to leverage the models’ strengths (improved accuracy, new task types) while mitigating their limitations.
Looking Ahead
What does the future hold as reasoning models mature? Both Google and OpenAI hint at rapid iteration. Google has already said all future Gemini models will bake in reasoning by default. OpenAI is developing o-series successors (e.g., o3, o4-mini) that extend chain-of-thought to images and audio. We expect context windows to keep growing beyond 1–2 million tokens, eventually enabling true long-term memory in AI agents. Training techniques will also evolve (for example, Stanford researchers are exploring better ways for LLMs to learn reasoning skills, though those results are still emerging).
A likely trend is hybrid systems: mixing large reasoning models with specialized tools. For example, Gemini 2.5 Pro can already invoke tools (like image editors or search) during reasoning. Future versions may seamlessly incorporate external knowledge bases or analytics platforms. Consulting firms and data experts will play a key role in building these integrations, ensuring the AI can query company databases or APIs securely.
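As a sketch of that tool-use pattern, the snippet below registers a stub inventory-lookup function as a tool and lets the model call it while reasoning, assuming the google-generativeai SDK's automatic function-calling support. The lookup function, its data, and the model id are all hypothetical examples, not part of any vendor's documented workflow.

```python
# Sketch of tool use during reasoning, assuming the google-generativeai SDK's
# automatic function calling. Function, data, and model id are hypothetical.
import google.generativeai as genai

def get_inventory_level(sku: str) -> int:
    """Return the current stock level for a SKU (stub for a real database query)."""
    return {"SKU-123": 42}.get(sku, 0)

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel(
    "gemini-2.5-pro-exp-03-25",       # assumed model id
    tools=[get_inventory_level],      # expose the function as a callable tool
)

chat = model.start_chat(enable_automatic_function_calling=True)
reply = chat.send_message("How many units of SKU-123 do we currently have in stock?")
print(reply.text)
```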
Another future direction is user adaptation. As these models become more capable, developers and analysts will learn to write better prompts and workflows. A feedback loop may emerge where enterprise use refines the AI. Our own consulting experience suggests organizations that partner early with AI vendors and invest in training see far higher gains. In that sense, the “call to action” is already beginning: businesses should start experimenting now, even as the models improve.
Finally, competitive dynamics will accelerate progress. Already, Anthropic, Microsoft (through its Copilot products and OpenAI partnership), and various startups are pushing similar capabilities. IBM and others are betting on industry-specific applications (e.g., code for cybersecurity). The next 12–18 months may see several new “reasoning” models for specialized domains (medicine, law, engineering), potentially licensed to enterprises. Firms should keep an eye on these developments and maintain a flexible AI strategy.
Conclusion: Embracing Next-Gen AI Models
Google’s Gemini 2.5 Pro and OpenAI’s o1 represent a significant leap in what AI systems can do. By integrating sophisticated reasoning, extended context, and true multimodality, they open up problem-solving and productivity scenarios that were science fiction a few years ago. Benchmarks show they outperform earlier models by wide margins on math, science, and coding tasks. Yet they also introduce new complexities – higher cost, slower responses, and the need for careful validation. Organizations that wish to stay on the cutting edge must balance these factors.
For companies considering adoption, expert guidance will be crucial. A consulting firm specializing in AI and data transformation can help bridge the gap between raw model capability and business impact. Consultants can assess where reasoning models add value (e.g., accelerating research, automating code reviews, enhancing analytics), design safe and efficient deployment plans, and train teams on how to “ask” these models effectively. They can also advise on change management, ensuring the new tools fit within existing processes and compliance frameworks.
In the end, Gemini 2.5 Pro and o1 are tools – extremely powerful ones – and their benefit will depend on how skillfully they are used. The technology is advancing at breakneck speed; the first users today are shaping its evolution. For organizations aiming to leverage these innovations, proactive experimentation is key. By partnering with experienced AI consultants, companies can avoid pitfalls and capitalize on the new capabilities, gaining a competitive edge in the era of advanced reasoning AI.
References
Jackson, F. (2025, March 26). Google’s Gemini 2.5 Pro is better at coding, math & science than your favorite AI model. TechRepublic. Retrieved from https://www.techrepublic.com/article/news-google-gemini-2-5-pro/
Kavukcuoglu, K. (2025, March 26). Gemini 2.5: Our newest Gemini model with thinking. Google Blog. Retrieved from https://blog.google/technology/google-deepmind/gemini-model-thinking-updates-march-2025/
Kerner, S. M. (2024, December 11). OpenAI o1 explained: Everything you need to know. TechTarget. Retrieved from https://www.techtarget.com/whatis/feature/OpenAI-o1-explained-Everything-you-need-to-know
OpenAI. (2024, September 12). Introducing OpenAI o1-preview. OpenAI Blog. Retrieved from https://openai.com/index/introducing-openai-o1-preview/
OpenAI. (2024, September 12). Scott Wu: OpenAI o1 & Coding. OpenAI Blog. Retrieved from https://openai.com/index/o1-coding/
OpenAI. (2024, December 5). Introducing ChatGPT Pro. OpenAI Blog. Retrieved from https://openai.com/index/introducing-chatgpt-pro/
OpenAI. (2025, April 16). Thinking with images. OpenAI Blog. Retrieved from https://openai.com/index/thinking-with-images/
Zeff, M. (2025, March 25). Google unveils a next-gen family of AI reasoning models. TechCrunch. Retrieved from https://techcrunch.com/2025/03/25/google-unveils-a-next-gen-ai-reasoning-model/
#AI #Gemini #OpenAI #Tech #FutureOfWork #TechInnovation #DeepLearning #FutureTech