CN113259363B - Covert communication method and device - Google Patents

Covert communication method and device

Info

Publication number
CN113259363B
CN113259363B (application CN202110577507.3A)
Authority
CN
China
Prior art keywords
sub
model parameter
target
model
parameter
Prior art date
Legal status: Active
Application number
CN202110577507.3A
Other languages
Chinese (zh)
Other versions
CN113259363A (en)
Inventor
孙奕
陈性元
汪德刚
周传鑫
张东巍
黄琳娜
Current Assignee
Information Engineering University of PLA Strategic Support Force
Original Assignee
Information Engineering University of PLA Strategic Support Force
Priority date
Filing date
Publication date
Application filed by Information Engineering University of PLA Strategic Support Force
Priority to CN202110577507.3A
Publication of CN113259363A
Application granted
Publication of CN113259363B
Legal status: Active

Classifications

    • H04L63/0407: Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the identity of one or more communicating identities is hidden
    • H04L63/0428: Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload
    • H04L2209/16: Obfuscation or hiding, e.g. involving white box (additional information relating to cryptographic mechanisms, H04L9/00)

Abstract

The method comprises: acquiring model parameters to be exchanged between a client participating in federated learning and a server; embedding covert data into the model parameters to be exchanged to obtain target model parameters, so that the covert data is hidden in the parameters to be exchanged and is not easily perceived; and, on this basis, sending the target model parameters to the client or the server. Effective covert communication between the client and the server is thereby achieved while ensuring that the source model is unaffected.

Description

Covert communication method and device
Technical Field
The present application relates to the field of communications technologies, and in particular, to a covert communication method and apparatus.
Background
With the advent of the big data era, data has become an important resource, and exchanging, sharing, and fully utilizing data are core goals for realizing the value of big data and improving its efficiency. The need to exchange and share data across different domains is therefore increasingly urgent. However, data barriers between domains remain difficult to break: with growing public concern over the security of personal data and the introduction of national personal-data privacy protection laws, data can hardly be exchanged across domains by direct sharing. In a certain sense, today's big data is increasingly becoming a set of data islands, and the value of big data analysis and mining cannot truly be realized.
To solve these problems, federated learning has emerged as a new generation of machine learning training method: it protects users' private data by keeping source data local and exchanging only model parameter information (such as gradient information), achieving minimal data transmission and creating a new paradigm for secure data sharing.
However, in the federal learning scenario, where only model parameter information is allowed to be exchanged, how to implement covert communication becomes a problem.
Disclosure of Invention
The application provides the following technical scheme:
a covert communication method, comprising:
obtaining model parameters to be exchanged between a client and a central server in a federated learning framework;
embedding hidden data into the model parameters to be exchanged between the client and the central server to obtain target model parameters;
and sending the target model parameters to the client or the central server.
Optionally, after the sending the target model parameters to the client or the central server, the method further includes:
extracting the hidden data from the target model parameters.
Optionally, the embedding hidden data into the model parameter to be exchanged between the client and the central server to obtain a target model parameter includes:
converting the hidden data into a bit string;
respectively obtaining a sub-model parameter to be processed corresponding to each bit in the bit string from the model parameters to be exchanged between the client and the central server, and processing the sub-model parameter to be processed based on the value of the bit to obtain a target sub-model parameter;
and obtaining target model parameters based on the plurality of target sub-model parameters.
Optionally, the processing the sub-model parameter to be processed based on the value of the bit to obtain a target sub-model parameter includes:
under the condition that the value of the bit is 0, if the sub-model parameter to be processed is an odd number, modifying the sub-model parameter to be processed to obtain a target sub-model parameter, wherein the target sub-model parameter is an even number;
under the condition that the value of the bit is 0, if the sub-model parameter to be processed is an even number, taking the sub-model parameter to be processed as a target sub-model parameter;
under the condition that the value of the bit is 1, if the sub-model parameter to be processed is an odd number, taking the sub-model parameter to be processed as a target sub-model parameter;
and under the condition that the value of the bit is 1, if the sub-model parameter to be processed is an even number, modifying the sub-model parameter to be processed to obtain a target sub-model parameter, wherein the target sub-model parameter is an odd number.
Optionally, the extracting the hidden data from the target model parameters includes:
extracting each target sub-model parameter from the target model parameters;
if the target sub-model parameter is an odd number, determining that a bit embedded in the target sub-model parameter is 1;
if the target sub-model parameter is an even number, determining that a bit embedded in the target sub-model parameter is 0;
and obtaining the hidden data based on the embedded bits in the target sub-model parameters.
A covert communication device, comprising:
the acquisition module is used for acquiring model parameters to be exchanged between the client and the central server in the federated learning framework;
the embedding module is used for embedding hidden data into the model parameters to be exchanged between the client and the central server to obtain target model parameters;
and the sending module is used for sending the target model parameters to the client or the central server.
Optionally, the apparatus further comprises:
and the extraction module is used for extracting the hidden data from the target model parameters.
Optionally, the embedding module is specifically configured to:
converting the hidden data into a bit string;
respectively obtaining a sub-model parameter to be processed corresponding to each bit in the bit string from the model parameters to be exchanged between the client and the central server, and processing the sub-model parameter to be processed based on the value of the bit to obtain a target sub-model parameter;
and obtaining target model parameters based on the plurality of target sub-model parameters.
Optionally, the embedding module is specifically configured to:
under the condition that the value of the bit is 0, if the sub-model parameter to be processed is an odd number, modifying the sub-model parameter to be processed to obtain a target sub-model parameter, wherein the target sub-model parameter is an even number;
under the condition that the value of the bit is 0, if the sub-model parameter to be processed is an even number, taking the sub-model parameter to be processed as a target sub-model parameter;
under the condition that the value of the bit is 1, if the sub-model parameter to be processed is an odd number, taking the sub-model parameter to be processed as a target sub-model parameter;
and under the condition that the value of the bit is 1, if the sub-model parameter to be processed is an even number, modifying the sub-model parameter to be processed to obtain a target sub-model parameter, wherein the target sub-model parameter is an odd number.
Optionally, the extracting module is specifically configured to:
extracting each target sub-model parameter from the target model parameters;
if the target sub-model parameter is an odd number, determining that the bit embedded in the target sub-model parameter is 1;
if the target sub-model parameter is an even number, determining that a bit embedded in the target sub-model parameter is 0;
and obtaining the hidden data based on the embedded bits in the target sub-model parameters.
Compared with the prior art, the beneficial effect of this application is:
In the present application, model parameters to be exchanged between a client participating in federated learning and the server are obtained, and covert data is embedded into those model parameters to obtain target model parameters, so that the covert data is hidden in the parameters to be exchanged. On this basis, the target model parameters are sent to the client or the server, which ensures effective covert communication between the client and the server while leaving the source model unaffected.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive labor.
Fig. 1 is a schematic flow chart of a covert communication method provided in embodiment 1 of the present application;
FIG. 2 is a diagram of a federated learning architecture provided by the present application;
FIG. 3 is a flow chart of a covert communication method provided in embodiment 2 of the present application;
FIG. 4 is a flow chart of a covert communication method provided in embodiment 3 of the present application;
FIG. 5 is a schematic diagram of a neural network model provided herein;
fig. 6 is a schematic diagram of a logical structure of the covert communication device provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, the present application is described in further detail with reference to the accompanying drawings and the detailed description.
Referring to fig. 1, a flow chart of a covert communication method provided in embodiment 1 of the present application is schematically shown, and as shown in fig. 1, the method may include, but is not limited to, the following steps:
and step S11, obtaining model parameters to be exchanged between the client and the central server in the federal learning framework.
The federated learning framework may include clients and a central server. As shown in fig. 2, the federated learning clients perform model training under the coordination of the central server, wherein: (1) each client trains a local model using its local data, and then uploads the parameter information of its local model to the central server; (2) the central server performs weighted aggregation on the model parameter information transmitted by each party according to a certain algorithm to obtain a global model, and then transmits the global model information to each client; (3) each client updates its local model after receiving the global model information, and then carries out the next round of training; (4) after multiple rounds of iterative training, a model approaching the one obtained by centralized machine learning is finally obtained. This training mode effectively solves problems such as the privacy leakage caused by traditional machine learning's training on aggregated source data.
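The weighted-aggregation step (2) above can be sketched as follows. The list-of-floats model representation, the function name `aggregate`, and the per-client weights are illustrative assumptions for this sketch, not details taken from this application.

```python
def aggregate(client_params: list[list[float]], weights: list[float]) -> list[float]:
    """Central-server side of step (2): weighted aggregation of the local
    model parameters uploaded by each client into global model parameters.
    The weighting scheme (e.g. proportional to local data size) is assumed."""
    total = sum(weights)
    return [
        sum(w * params[i] for w, params in zip(weights, client_params)) / total
        for i in range(len(client_params[0]))
    ]

# Two clients upload two-parameter local models; the server aggregates them
# into the global model that is then issued back to every client (step (3)).
global_params = aggregate([[1.0, 2.0], [3.0, 4.0]], weights=[1.0, 1.0])
```

With equal weights this reduces to plain per-position averaging of the uploaded parameters.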
In the process of the client performing model training in cooperation with the central server, the model parameters to be exchanged between the client and the central server can be acquired. The model parameters to be exchanged between the client and the central server may be: the local model parameters uploaded by the client to the central server; or the global model parameters issued by the central server to the client.
Step S12, embedding hidden data into the model parameters to be exchanged between the client and the central server to obtain target model parameters.
In this embodiment, the hidden data is embedded into the model parameters to be exchanged between the client and the central server, so that the hidden data is hidden in the model parameters to be exchanged, and the target model parameters are obtained.
The covert data may include, but is not limited to: private data.
Step S13, sending the target model parameters to the client or the central server.
When the model parameter to be exchanged between the client and the central server is a local model parameter uploaded to the central server by the client, sending the target model parameter to the client or the central server may be understood as: and sending the target model parameters to the central server.
In a case where the model parameter to be exchanged between the client and the central server is a global model parameter issued by the central server to the client, sending the target model parameter to the client or the central server may be understood as: and sending the target model parameters to the client.
In this application, by obtaining the model parameters to be exchanged between a client participating in federated learning and the server and embedding covert data into those model parameters to obtain the target model parameters, the covert data is hidden in the parameters to be exchanged and is not easily perceived. On this basis, the target model parameters are sent to the client or the server, ensuring effective covert communication between the client and the server while leaving the source model unaffected.
As another alternative embodiment of the present application, referring to fig. 3, a flowchart of an embodiment 2 of a covert communication method provided in the present application is provided, and this embodiment is mainly an extension of the covert communication method described in the above embodiment 1, as shown in fig. 3, the method may include, but is not limited to, the following steps:
and step S21, obtaining model parameters to be exchanged between the client and the central server in the federal learning framework.
And step S22, embedding hidden data into the model parameters to be exchanged between the client and the central server to obtain target model parameters.
And step S23, sending the target model parameters to the client or the central server.
The detailed procedures of steps S21-S23 can be found in the related descriptions of steps S11-S13 in embodiment 1, and are not repeated herein.
Step S24, extracting the hidden data from the target model parameters.
In this embodiment, the hidden data is extracted from the target model parameters, so as to achieve the acquisition of the hidden data.
It should be noted that besides the hidden data, model parameters to be exchanged between the client and the central server may also be extracted from the target model parameters.
As another optional embodiment of the present application, referring to fig. 4, which is a flowchart of embodiment 3 of a covert communication method provided in the present application, this embodiment mainly relates to a refinement of the covert communication method described in the foregoing embodiment 1, and as shown in fig. 4, the method may include, but is not limited to, the following steps:
and step S31, obtaining model parameters to be exchanged between the client and the central server in the federal learning framework.
The detailed process of step S31 can be referred to the related description of step S11 in embodiment 1, and is not repeated herein.
Step S32, converting the hidden data into a bit string.
Step S33, respectively obtaining, from the model parameters to be exchanged between the client and the central server, the sub-model parameter to be processed corresponding to each bit in the bit string, and processing the sub-model parameter to be processed based on the value of the bit to obtain the target sub-model parameter.
Obtaining the sub-model parameter to be processed corresponding to each bit in the bit string from the model parameters to be exchanged between the client and the central server can be understood as follows: respectively selecting different model parameters from the model parameters to be exchanged between the client and the central server as the sub-model parameters to be processed corresponding to each bit in the bit string.
Processing the sub-model parameter to be processed based on the value of the bit to obtain a target sub-model parameter may include:
S3301, under the condition that the value of the bit is 0, if the sub-model parameter to be processed is an odd number, modifying the sub-model parameter to be processed to obtain a target sub-model parameter, wherein the target sub-model parameter is an even number.
Modifying the sub-model parameter to be processed may include, but is not limited to: subtracting 1 from or adding 1 to its last digit.
S3302, under the condition that the value of the bit is 0, if the sub-model parameter to be processed is an even number, taking the sub-model parameter to be processed as the target sub-model parameter.
S3303, under the condition that the value of the bit is 1, if the sub-model parameter to be processed is an odd number, taking the sub-model parameter to be processed as the target sub-model parameter.
S3304, under the condition that the value of the bit is 1, if the sub-model parameter to be processed is an even number, modifying the sub-model parameter to be processed to obtain a target sub-model parameter, wherein the target sub-model parameter is an odd number.
Modifying the sub-model parameter to be processed may include, but is not limited to: subtracting 1 from or adding 1 to its last digit.
Step S34, obtaining target model parameters based on the plurality of target sub-model parameters.
Obtaining the target model parameters based on the plurality of target sub-model parameters can be understood as: combining the plurality of target sub-model parameters into the target model parameters.
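The embedding procedure of steps S32-S34 can be sketched in Python as follows. Treating each floating-point parameter's sixth decimal digit as its "last digit" (via the `scale` constant) is an assumption made for this sketch; the application only requires that each to-be-processed sub-model parameter end up odd for a covert bit of 1 and even for a covert bit of 0.

```python
def to_bits(data: bytes) -> list[int]:
    """Step S32: convert the covert data into a bit string (MSB first)."""
    return [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]

def embed(params: list[float], data: bytes, scale: int = 10**6) -> list[float]:
    """Steps S33-S34: for each covert bit, force the parity of the
    corresponding sub-model parameter (odd -> 1, even -> 0) by adjusting
    its last retained digit when needed."""
    bits = to_bits(data)
    if len(bits) > len(params):
        raise ValueError("not enough model parameters to carry the covert data")
    target = list(params)
    for i, bit in enumerate(bits):
        q = round(target[i] * scale)   # integer view of the parameter
        if q % 2 != bit:               # parity mismatch with the covert bit
            q += 1                     # modify the last digit by adding 1
                                       # (subtracting 1 would also work)
        target[i] = q / scale          # target sub-model parameter
    return target
```

Parameters whose parity already matches the bit are left unchanged, corresponding to cases S3302 and S3303 above; the adjusted ones correspond to S3301 and S3304.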
Steps S32-S34 are a specific implementation of step S12 in embodiment 1.
Step S35, sending the target model parameters to the client or the central server.
The detailed process of step S35 can be referred to the related description of step S13 in embodiment 1, and is not repeated here.
As another alternative embodiment of the present application, it is mainly a refinement of the covert communication method described in the above embodiment 2, and the method may include, but is not limited to, the following steps:
and S41, obtaining model parameters to be exchanged between the client and the central server in the federal learning framework.
S42, converting the concealed data into a bit string.
S43, respectively obtaining the submodel parameters to be processed corresponding to each bit in the bit string from the model parameters to be exchanged between the client and the central server, and processing the submodel parameters to be processed based on the value of the bit to obtain the target submodel parameters.
Obtaining the sub-model parameter to be processed corresponding to each bit in the bit string from the model parameters to be exchanged between the client and the central server can be understood as follows: respectively selecting different model parameters from the model parameters to be exchanged between the client and the central server as the sub-model parameters to be processed corresponding to each bit in the bit string.
The processing the sub-model parameter to be processed based on the value of the bit to obtain a target sub-model parameter may include:
S4301, under the condition that the value of the bit is 0, if the sub-model parameter to be processed is an odd number, modifying the sub-model parameter to be processed to obtain a target sub-model parameter, wherein the target sub-model parameter is an even number.
Modifying the sub-model parameter to be processed may include, but is not limited to: subtracting 1 from or adding 1 to its last digit.
S4302, under the condition that the value of the bit is 0, if the sub-model parameter to be processed is an even number, taking the sub-model parameter to be processed as the target sub-model parameter.
S4303, under the condition that the value of the bit is 1, if the sub-model parameter to be processed is an odd number, taking the sub-model parameter to be processed as the target sub-model parameter.
S4304, under the condition that the value of the bit is 1, if the sub-model parameter to be processed is an even number, modifying the sub-model parameter to be processed to obtain a target sub-model parameter, wherein the target sub-model parameter is an odd number.
Modifying the sub-model parameter to be processed may include, but is not limited to: subtracting 1 from or adding 1 to its last digit.
Step S44, obtaining target model parameters based on the plurality of target sub-model parameters.
Obtaining the target model parameters based on the plurality of target sub-model parameters can be understood as: combining the plurality of target sub-model parameters into the target model parameters.
Steps S42-S44 are a specific implementation of step S22 in embodiment 2.
Step S45, sending the target model parameters to the client or the central server.
Step S46, extracting each target sub-model parameter from the target model parameters.
Step S47, if the target sub-model parameter is an odd number, determining that the bit embedded in the target sub-model parameter is 1.
Step S48, if the target sub-model parameter is an even number, determining that the bit embedded in the target sub-model parameter is 0.
Step S49, obtaining the hidden data based on the bits embedded in the plurality of target sub-model parameters.
Obtaining the hidden data based on the bits embedded in the plurality of target sub-model parameters may be understood as: and forming bit strings by the bits embedded in the target sub-model parameters, and converting the bit strings to obtain hidden data.
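The extraction of steps S46-S49 can be sketched as the inverse operation. The `scale` convention for reading each parameter's last retained digit mirrors the assumption made on the embedding side and is illustrative only; `n_bytes` (the length of the covert payload) is assumed to be known to the receiver in advance.

```python
def extract(target_params: list[float], n_bytes: int, scale: int = 10**6) -> bytes:
    """Steps S46-S49: read one covert bit from each target sub-model
    parameter (odd -> 1, even -> 0) and reassemble the bit string into
    the covert data."""
    bits = [round(p * scale) % 2 for p in target_params[: n_bytes * 8]]
    covert = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i : i + 8]:   # MSB-first, matching the embedding order
            byte = (byte << 1) | b
        covert.append(byte)
    return bytes(covert)

# Eight parameters whose scaled parities spell 0,1,0,0,0,0,0,1, i.e. the
# ASCII byte for "A".
payload = extract([v / 10**6 for v in [0, 3, 4, 6, 8, 10, 12, 15]], n_bytes=1)
```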
In this embodiment, the model parameter to be exchanged may be any one or both of a weight and a bias.
For example, in a neural network model under the federated learning framework, each neuron has a weight parameter and a bias parameter (gradient information) for each node of the next layer to which it is connected, and the model parameters exchanged between the client and the central server are any one or more of these weights and biases. Let

X = {x_1, x_2, ..., x_n}

represent the original model parameters (including weights and biases), where n is the number of model parameters and x_i ∈ range(var), var being a variable in the specific machine learning framework and range(var) the value range of that variable. In PyTorch, model parameters are generally calculated and stored as floating-point numbers. Let

Y = {y_1, y_2, ..., y_n}

represent the model parameters after the hidden data has been embedded. The data embedding and data extraction processes can then be represented as:

Y = Embed(X)
X = Extract(Y)
The embedding method is illustrated below using the neural network model shown in FIG. 5, which has only one input layer and one output layer. The output o_1 of this neural network model can be calculated by the following formula:

o_1 = x_1 × w_{1,1} + x_2 × w_{2,1} + ... + x_n × w_{n,1} + b_1

wherein w_{i,j} and b_{i,j} are the weight and bias from neuron n_i to the model output o_j; o_2, o_3, ... are calculated in the same way, so this neural network model has a total of 748 × 10 × 2 parameter values. These parameter values are stored in the neural network as floating-point numbers; for example, the weights of a 5 × 5 convolution kernel can be expressed as:

[[-0.088092  0.082623  0.18617   0.254095 -0.01022 ],
 [ 0.092659  0.17915   0.199381 -0.046362  0.109278],
 [-0.052475 -0.157151 -0.108171  0.110861 -0.073945],
 [-0.068187  0.091944 -0.074742  0.048471 -0.147859],
 [-0.222414  0.175914  0.183104  0.139022 -0.094722]]
the magnitude of the weights may represent the contribution of the current neuron to the network output, which may have a negligible effect on the neural network model if these weights are slightly modified. Examples are as follows: if x 1 =1,w 1,1 0.088092, and the neural network model output is:
o 1 =x 1 ×w 11 +x 2 ×w 21 +…+x n ×w n1 +b 1
if w is modified because of the data embedding operation 1,1 0.088093, then the output of the neural network model is:
o′ 1 =1×(088092+0.000001)+(x 2 ×w 21 +…+x n ×w n1 +b 1 )=0.98+0.00001*1≈0.98
As the above example shows, embedding 1 bit of information adds a perturbation of only 0.000001 to the output of the model, which can be ignored. In practice, the number of parameters in a machine learning model is often very large, so the channel capacity of the covert communication method proposed in this application is considerable.
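The negligible-perturbation argument above can be checked numerically. The input values and bias below are made-up illustrative numbers; only the weight 0.088092 and its embedded counterpart 0.088093 come from the example in the text.

```python
def neuron_output(x: list[float], w: list[float], b: float) -> float:
    """o_j = x_1*w_1j + x_2*w_2j + ... + x_n*w_nj + b_j for one output neuron."""
    return sum(xi * wi for xi, wi in zip(x, w)) + b

x = [1.0, 0.5, -0.25]                 # illustrative inputs
w = [0.088092, 0.2, -0.1]             # original weights
w_embedded = [0.088093, 0.2, -0.1]    # one covert bit changed the last digit

o1 = neuron_output(x, w, b=0.01)
o1_embedded = neuron_output(x, w_embedded, b=0.01)
# The output changes by x_1 * 0.000001, i.e. a perturbation on the order of
# 1e-6 (up to floating-point rounding).
```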
Steps S46-S49 are a specific implementation of step S24 in embodiment 2.
Next, a covert communication device provided in the present application will be described, and the covert communication device described below and the covert communication method described above may be referred to in correspondence.
Referring to fig. 6, the covert communication device comprises: an acquisition module 100, an embedding module 200 and a sending module 300.
An obtaining module 100, configured to obtain a model parameter to be exchanged between a client and a central server in a federated learning framework;
an embedding module 200, configured to embed hidden data into the model parameter to be exchanged between the client and the central server, so as to obtain a target model parameter;
a sending module 300, configured to send the target model parameter to the client or the central server.
In this embodiment, the apparatus may further include:
and the extraction module is used for extracting the hidden data from the target model parameters.
In this embodiment, the embedding module 200 may be specifically configured to:
converting the hidden data into a bit string;
respectively obtaining a sub-model parameter to be processed corresponding to each bit in the bit string from the model parameters to be exchanged between the client and the central server, and processing the sub-model parameter to be processed based on the value of the bit to obtain a target sub-model parameter;
and obtaining target model parameters based on the plurality of target sub-model parameters.
In this embodiment, the embedding module 200 may be specifically configured to:
under the condition that the value of the bit is 0, if the sub-model parameter to be processed is an odd number, modifying the sub-model parameter to be processed to obtain a target sub-model parameter, wherein the target sub-model parameter is an even number;
under the condition that the value of the bit is 0, if the sub-model parameter to be processed is an even number, taking the sub-model parameter to be processed as a target sub-model parameter;
under the condition that the value of the bit is 1, if the sub-model parameter to be processed is an odd number, taking the sub-model parameter to be processed as a target sub-model parameter;
and under the condition that the value of the bit is 1, if the sub-model parameter to be processed is an even number, modifying the sub-model parameter to be processed to obtain a target sub-model parameter, wherein the target sub-model parameter is an odd number.
In this embodiment, the extraction module may be specifically configured to:
extracting each target sub-model parameter from the target model parameters;
if the target sub-model parameter is an odd number, determining that a bit embedded in the target sub-model parameter is 1;
if the target sub-model parameter is an even number, determining that a bit embedded in the target sub-model parameter is 0;
and obtaining the hidden data based on the embedded bits in the target sub-model parameters.
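The extraction side simply reads the parities back and reassembles the bytes. A matching sketch under the same assumptions (integer carrier parameters; names are illustrative):

```python
def extract_hidden_data(target_params: list, n_bytes: int) -> bytes:
    """Recover the hidden data from the target sub-model parameters:
    an odd parameter carries bit 1, an even parameter carries bit 0
    (MSB first within each byte)."""
    bits = [p % 2 for p in target_params[:n_bytes * 8]]
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b  # shift in the next parity bit
        out.append(byte)
    return bytes(out)
```

Sender and receiver must agree out of band on which sub-model parameters act as carriers, their order, and the payload length; to any other participant the exchanged parameters look like an ordinary federated-learning update.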
It should be noted that the embodiments herein are described in a progressive manner, each embodiment focusing mainly on its differences from the others; for the same or similar parts, the embodiments may be referred to one another. Since the device embodiment is substantially similar to the method embodiment, its description is brief; for relevant details, reference may be made to the description of the method embodiment.
Finally, it should also be noted that, herein, relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," and any variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
For convenience of description, the above device is described as being divided into various units by function. Of course, when implementing the present application, the functions of the units may be implemented in one or more pieces of software and/or hardware.
From the above description of the embodiments, it will be clear to those skilled in the art that the present application can be implemented by means of software plus a necessary general-purpose hardware platform. Based on such understanding, the technical solutions of the present application may, in essence or in part, be embodied in the form of a software product. The software product may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, or an optical disc, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments, or in certain parts of the embodiments, of the present application.
The foregoing detailed description is directed to the covert communication method and apparatus provided by the present application. Specific examples are used herein to illustrate the principles and implementations of the present application; the descriptions of the above embodiments are intended only to help in understanding the method and its core ideas. Meanwhile, a person skilled in the art may, following the ideas of the present application, make changes to the specific embodiments and the scope of application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (4)

1. A covert communication method, comprising:
obtaining model parameters to be exchanged between a client and a central server in a federated learning framework;
embedding hidden data into the model parameters to be exchanged between the client and the central server to obtain target model parameters;
sending the target model parameters to the client or the central server;
extracting the hidden data from the target model parameters;
the embedding of the hidden data into the model parameters to be exchanged between the client and the central server to obtain the target model parameters comprises:
converting the hidden data into a bit string;
obtaining, from the model parameters to be exchanged between the client and the central server, a sub-model parameter to be processed corresponding to each bit in the bit string, and processing the sub-model parameter to be processed based on the value of the bit to obtain a target sub-model parameter;
obtaining target model parameters based on a plurality of target sub-model parameters;
the processing the sub-model parameter to be processed based on the value of the bit to obtain a target sub-model parameter, including:
under the condition that the value of the bit is 0, if the sub-model parameter to be processed is an odd number, modifying the sub-model parameter to be processed to obtain a target sub-model parameter, wherein the target sub-model parameter is an even number;
under the condition that the value of the bit is 0, if the sub-model parameter to be processed is an even number, taking the sub-model parameter to be processed as a target sub-model parameter;
under the condition that the value of the bit is 1, if the sub-model parameter to be processed is an odd number, taking the sub-model parameter to be processed as a target sub-model parameter;
and under the condition that the value of the bit is 1, if the sub-model parameter to be processed is an even number, modifying the sub-model parameter to be processed to obtain a target sub-model parameter, wherein the target sub-model parameter is an odd number.
2. The method of claim 1, wherein said extracting the hidden data from the target model parameters comprises:
extracting each target sub-model parameter from the target model parameters;
if the target sub-model parameter is an odd number, determining that the bit embedded in the target sub-model parameter is 1;
if the target sub-model parameter is an even number, determining that a bit embedded in the target sub-model parameter is 0;
and obtaining the hidden data based on the embedded bits in the target sub-model parameters.
3. A covert communication device, comprising:
the acquisition module is used for acquiring model parameters to be exchanged between the client and the central server in the federated learning framework;
the embedding module is used for embedding hidden data into the model parameters to be exchanged between the client and the central server to obtain target model parameters;
a sending module, configured to send the target model parameter to the client or the central server;
an extraction module for extracting the hidden data from the target model parameters;
the embedded module is specifically configured to:
converting the hidden data into a bit string;
obtaining, from the model parameters to be exchanged between the client and the central server, a sub-model parameter to be processed corresponding to each bit in the bit string, and processing the sub-model parameter to be processed based on the value of the bit to obtain a target sub-model parameter;
obtaining target model parameters based on a plurality of target sub-model parameters;
the embedded module is specifically configured to:
under the condition that the value of the bit is 0, if the sub-model parameter to be processed is an odd number, modifying the sub-model parameter to be processed to obtain a target sub-model parameter, wherein the target sub-model parameter is an even number;
under the condition that the value of the bit is 0, if the sub-model parameter to be processed is an even number, taking the sub-model parameter to be processed as a target sub-model parameter;
under the condition that the value of the bit is 1, if the sub-model parameter to be processed is an odd number, taking the sub-model parameter to be processed as a target sub-model parameter;
and under the condition that the value of the bit is 1, if the sub-model parameter to be processed is an even number, modifying the sub-model parameter to be processed to obtain a target sub-model parameter, wherein the target sub-model parameter is an odd number.
4. The apparatus according to claim 3, wherein the extraction module is specifically configured to:
extracting each target sub-model parameter from the target model parameters;
if the target sub-model parameter is an odd number, determining that a bit embedded in the target sub-model parameter is 1;
if the target sub-model parameter is an even number, determining that the bit embedded in the target sub-model parameter is 0;
and obtaining the hidden data based on the embedded bits in the target sub-model parameters.
CN202110577507.3A 2021-05-26 2021-05-26 Covert communication method and device Active CN113259363B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110577507.3A CN113259363B (en) 2021-05-26 2021-05-26 Covert communication method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110577507.3A CN113259363B (en) 2021-05-26 2021-05-26 Covert communication method and device

Publications (2)

Publication Number Publication Date
CN113259363A CN113259363A (en) 2021-08-13
CN113259363B true CN113259363B (en) 2022-09-02

Family

ID=77184773

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110577507.3A Active CN113259363B (en) 2021-05-26 2021-05-26 Covert communication method and device

Country Status (1)

Country Link
CN (1) CN113259363B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110572253A (en) * 2019-09-16 2019-12-13 济南大学 Method and system for enhancing privacy of federated learning training data
CN111428881A (en) * 2020-03-20 2020-07-17 深圳前海微众银行股份有限公司 Recognition model training method, device, equipment and readable storage medium
CN111477290A (en) * 2020-03-05 2020-07-31 上海交通大学 Federal learning and image classification method, system and terminal for protecting user privacy
CN111582508A (en) * 2020-04-09 2020-08-25 上海淇毓信息科技有限公司 Strategy making method and device based on federated learning framework and electronic equipment
CN111898137A (en) * 2020-06-30 2020-11-06 深圳致星科技有限公司 Private data processing method, equipment and system for federated learning
CN112200713A (en) * 2020-10-28 2021-01-08 支付宝(杭州)信息技术有限公司 Business data processing method, device and equipment in federated learning
CN112257105A (en) * 2020-10-19 2021-01-22 中山大学 Federal learning method and system based on parameter replacement algorithm
CN112464290A (en) * 2020-12-17 2021-03-09 浙江工业大学 Vertical federal learning defense method based on self-encoder

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11593634B2 (en) * 2018-06-19 2023-02-28 Adobe Inc. Asynchronously training machine learning models across client devices for adaptive intelligence
WO2020216875A1 (en) * 2019-04-23 2020-10-29 Onespan Nv Methods and systems for privacy preserving evaluation of machine learning models
CN111539731A (en) * 2020-06-19 2020-08-14 支付宝(杭州)信息技术有限公司 Block chain-based federal learning method and device and electronic equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Yang Geng; Wang Zhousheng. Research progress on privacy protection in federated learning. Journal of Nanjing University of Posts and Telecommunications (Natural Science Edition). 2020. *
A survey of federated learning research; Zhou Chuanxin, Sun Yi, Wang Degang, Ge Huawei; Chinese Journal of Network and Information Security; 2021-03-25; full text *

Also Published As

Publication number Publication date
CN113259363A (en) 2021-08-13

Similar Documents

Publication Publication Date Title
Liu et al. From distributed machine learning to federated learning: A survey
Wang et al. A BP neural network model optimized by mind evolutionary algorithm for predicting the ocean wave heights
CN109346063B (en) Voice data enhancement method
US20200184316A1 (en) Generating discrete latent representations of input data items
CN105389770B (en) Embedded, extracting method and device based on BP and the image watermark of RBF neural
CN112508118B (en) Target object behavior prediction method aiming at data offset and related equipment thereof
CN108205674A (en) Content identification method, electronic equipment, storage medium and the system of social APP
Li et al. A generative steganography method based on wgan-gp
CN106487856A (en) A kind of method and system of network file storage
CN106355191A (en) Deep generating network random training algorithm and device
CN113259363B (en) Covert communication method and device
CN111582284A (en) Privacy protection method and device for image recognition and electronic equipment
Lengyel et al. Online social networks, location, and the dual effect of distance from the centre
Xiao Compensation method of electronic commerce data transmission delay based on fuzzy encryption algorithm
WO2022178970A1 (en) Speech noise reducer training method and apparatus, and computer device and storage medium
Han et al. To What Extent do Neighbouring Populations Affect Local Population Growth Over Time?
TWI812293B (en) Fedrated learning system and method using data digest
Rakotondrazafy et al. Developing a common definition for LMMAs in Madagascar
Lian et al. Traffic Sign Recognition using Optimized Federated Learning in Internet of Vehicles
Zhou et al. Latent Vector Optimization-Based Generative Image Steganography for Consumer Electronic Applications
Nematollahi A machine learning approach for digital watermarking
Chopra et al. Image steganography using edge detection technique
US20220405572A1 (en) Methods for converting hierarchical data
CN117395038A (en) Safety enhancement method, system, equipment and medium of MCU federal learning framework
CN117273163A (en) Federal learning system using data summary and method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant