Abstract
Communication is one of the most fundamental functions of business, government, and society at large. In recent years, the convergence of data, voice, and multimedia over digital networks, coupled with continuous improvement in network capacity and reliability, has enabled a wide range of communication applications such as Voice over IP (VoIP) and voice, video, and multimedia conferencing. Today's communication tools, however, are developed on an ad hoc basis with limited separation between application needs and logic, device types, and underlying networks. These complex interdependencies result in high costs and lengthy development cycles. Such vertically developed systems typically have fixed functionality and interfaces that do not interoperate with each other because of differences in design, architecture, APIs, and network/device assumptions. It is difficult to adapt these systems to evolving user needs, underlying network dynamics, and new hardware technology. Users, particularly sophisticated domain-specific users, are forced to hop between tools to satisfy their communication needs. A fragmented development approach also poses major challenges to integration and to providing integrated communication solutions. Finally, it hinders the development of new communication tools, particularly for domain-specific applications (e.g., telemedicine), because of the complexity, cost, and lengthy cycle required by vertical development.

The Communication Virtual Machine (CVM) is a software technology that comprises a new concept, process, and design for conceiving, synthesizing, and delivering digital communication solutions across application domains. In addition, CVM provides a new means of rich multimedia information exchange.
This model-driven process delivers tailor-made applications that fit users' communication needs: 1) a domain expert elicits communication requirements; 2) the expert captures those requirements as a model in CVM, called a communication schema; 3) end users may load and further modify the schema to satisfy their needs; and 4) communication is ready to begin. With this fully automated, model-driven process, a sophisticated communication model can be built in hours or days, rather than the months or years needed to design and implement a major communication application. CVM eliminates the need for system development to deliver a new communication service or application, dramatically reducing cost and time from concept to market (or time to user). It also offers superior platform flexibility and adaptivity.

Benefit
- Cost-effective and time-efficient communication software development
- Faster development, ease of use, reusability, and reliability
- Synchronous and asynchronous communication modes
- Network and device independent
- Plug-and-play and Internet deployable

Market Application
An immediate application of CVM is in telemedicine, supporting healthcare communication and information exchange. Other applications include disaster management, defense communication, banking and finance, and any other industry sector with sophisticated communication needs. CVM is also promising as a general communication tool for enterprises such as universities and middle-to-large-sized companies.
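The four-step, model-driven process described above can be illustrated with a toy sketch. All names here (elicit_requirements, define_schema, start_session, and so on) are hypothetical stand-ins invented for illustration; the abstract does not describe CVM's actual API.

```python
# Toy illustration of a model-driven communication workflow.
# Every function and field name below is hypothetical, not the CVM API.

def elicit_requirements():
    """Step 1: a domain expert elicits communication requirements."""
    return {"participants": ["physician", "radiologist"],
            "media": ["audio", "video"]}

def define_schema(requirements):
    """Step 2: express the requirements as a communication schema."""
    return {"name": "teleconsultation",
            "connections": [tuple(requirements["participants"])],
            "media": list(requirements["media"])}

def modify_schema(schema, extra_media):
    """Step 3: an end user loads the schema and tailors it."""
    tailored = dict(schema)
    tailored["media"] = schema["media"] + [extra_media]
    return tailored

def start_session(schema):
    """Step 4: communication is ready to begin (stubbed here)."""
    return f"session '{schema['name']}' started with media {schema['media']}"

session = start_session(
    modify_schema(define_schema(elicit_requirements()), "image-stream"))
```

The point of the sketch is that only the schema (data) changes between applications; no new system is developed for each communication need.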
Information Technology
Abstract
Distributed computing architectures widely use cloud-service datacenters to meet server performance demands as many computing applications adopt a "cloud service" computing model. Typical examples of such architectures include physical or logical separation of the computation domain and the storage domain, or the use of distributed replicated storage, where a host server replicates its data to remote servers. With these new architectures, many high-performance servers in the datacenter either locally cache or store a copy of the data in a distributed storage system or, in the case of a centralized storage system, locally cache the remotely stored persistent data. Such caching improves data access performance. When an application using the cloud service needs to update data held in a local cache or store, however, challenges arise from data transfer latencies between the layers of a distributed architecture.

FIU inventors have developed techniques and
systems for enabling local independent failure domains (LA-IFDs) in a host server or datacenter to address these challenges, using a communications protocol between the LA-IFD and its host server. These techniques include receiving a request to
write a data segment to persistent storage; synchronously storing the data
segment in a buffered data segment at the LA-IFD and initiating an asynchronous
update of the data segment at a remote storage system; sending a write
acknowledgement indicating completion to the requestor; and, after receiving a
completion notification from the remote storage system, removing the buffered
data segment from the LA-IFD. In some cases, techniques allow a host server and
LA-IFD pair to monitor one another for failures and implement a modified
protocol in the event of unavailability.

Benefit
- Improves blocking performance on host servers
- Maintains scalability and data integrity levels
- Less time spent blocking can improve:
  - performance for users of the application
  - resource management/utilization on the host server

Market Application
Scalable datacenter environments
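The write protocol described in the abstract (synchronous local buffering, asynchronous remote update, immediate acknowledgement, and eviction on completion) can be sketched as follows. This is a minimal simulation with hypothetical class names (HostServer, LocalIFD, RemoteStore), not the patented implementation, and it omits the failure-monitoring protocol.

```python
import threading

class RemoteStore:
    """Stand-in for the remote persistent storage system (hypothetical)."""
    def __init__(self):
        self.data = {}
        self.lock = threading.Lock()

    def write(self, seg_id, payload, on_complete):
        # Simulate an asynchronous remote update followed by a
        # completion notification back to the host.
        def work():
            with self.lock:
                self.data[seg_id] = payload
            on_complete(seg_id)
        threading.Thread(target=work).start()

class LocalIFD:
    """Local independent failure domain: buffers each data segment
    until the remote storage system confirms the update."""
    def __init__(self):
        self.buffer = {}
        self.lock = threading.Lock()

    def store(self, seg_id, payload):
        with self.lock:
            self.buffer[seg_id] = payload

    def evict(self, seg_id):
        with self.lock:
            self.buffer.pop(seg_id, None)

class HostServer:
    def __init__(self, la_ifd, remote):
        self.la_ifd = la_ifd
        self.remote = remote

    def handle_write(self, seg_id, payload):
        # 1. Synchronously store the segment as a buffered segment
        #    at the LA-IFD.
        self.la_ifd.store(seg_id, payload)
        # 2. Initiate the asynchronous update at remote storage.
        self.remote.write(seg_id, payload, self._on_remote_complete)
        # 3. Send a write acknowledgement to the requestor immediately.
        return "ACK"

    def _on_remote_complete(self, seg_id):
        # 4. On the completion notification, remove the buffered copy.
        self.la_ifd.evict(seg_id)
```

Because the acknowledgement is sent as soon as the local buffering completes, the requestor is not blocked on the remote round trip, which is the source of the blocking-performance benefit listed above.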
Abstract
The ability to cache and buffer file data within an operating
system (OS) page cache is a key performance optimization that has been the
standard for more than four decades. An OS can seamlessly fetch pages into
memory from backing storage when necessary as they are read or written to by a
process. The same key design is also implemented in networked file systems
whereby a client issues page fetches over the network to a remote file server.
Unfortunately, a drawback of this design is that processes are blocked by the OS during the page fetch step. Recent findings have shown that the page fetch is avoidable when a write targets a page that is not present in the page cache. This advancement allows the OS to buffer the written data temporarily elsewhere in memory and unblock the process immediately.

FIU inventors have separated the page fetch policy from the page fetch mechanism and developed non-blocking reads to pages that are
not in the file system cache if the data being referenced has been recently
written. This design and its implementations address the correctness concerns of non-blocking writes with respect to the durability, ordering, and consistency semantics of file system operations. Performance evaluations revealed:

- Throughput improvements of up to 45.5x across workload types when non-blocking writes were used
- Non-blocking writes reduced write operation latencies by as much as 65 to 79%
- The overhead introduced by non-blocking writes is negligible, with little or no loss of performance when workloads cannot benefit from them
- The design provides a starting point for similar implementations in multiple OSs, while opening avenues for future developments and enhancements

Benefit
- Makes page fetches asynchronous and reduces process blocking
- Can increase page fetch parallelism
- Does not compromise system correctness or application ordering semantics for data reads/writes
- Does not compromise application and system recovery

Market Application
Most industries that rely heavily on computers for everyday functions, such as governments, corporations, and academic institutions
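The core non-blocking write idea above (buffer a write to a non-resident page elsewhere in memory, return immediately, and merge the buffered data when the asynchronous page fetch completes) can be sketched as a toy page-cache model. The class and method names are illustrative only, not the actual OS implementation, and durability and ordering machinery is omitted.

```python
class NonBlockingPageCache:
    """Toy model of non-blocking writes: a write to a page that is not
    cached is recorded as an in-memory patch instead of blocking the
    process on a synchronous page fetch. Names are illustrative only."""

    def __init__(self, backing):
        self.backing = backing   # page_no -> bytes (the backing storage)
        self.cache = {}          # page_no -> bytearray (resident pages)
        self.patches = {}        # page_no -> list of (offset, data)

    def write(self, page_no, offset, data):
        page = self.cache.get(page_no)
        if page is not None:
            # Fast path: the page is resident, apply the write in place.
            page[offset:offset + len(data)] = data
        else:
            # Non-blocking path: buffer the written data elsewhere in
            # memory and return immediately, without a page fetch.
            self.patches.setdefault(page_no, []).append((offset, data))

    def on_fetch_complete(self, page_no):
        # When the asynchronous fetch finishes, install the page and
        # replay the buffered writes in their original order.
        page = bytearray(self.backing[page_no])
        for offset, data in self.patches.pop(page_no, []):
            page[offset:offset + len(data)] = data
        self.cache[page_no] = page

    def read(self, page_no, offset, length):
        # For simplicity, assume the page has already been installed;
        # a real system would trigger a (possibly non-blocking) fetch.
        return bytes(self.cache[page_no][offset:offset + length])
```

The design choice the sketch highlights is that correctness is preserved by replaying buffered patches over the fetched page in write order, so later reads observe the same bytes a blocking design would have produced.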